<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[debloper - Medium]]></title>
        <description><![CDATA[From astrophysics to metaphysics; original research and thought experiments living outside the Overton window. - Medium]]></description>
        <link>https://medium.com/debloper?source=rss----a3e1a3a13882---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>debloper - Medium</title>
            <link>https://medium.com/debloper?source=rss----a3e1a3a13882---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 20:24:48 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/debloper" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Intuitive Time]]></title>
            <link>https://medium.com/debloper/intuitive-time-b0f465d215f6?source=rss----a3e1a3a13882---4</link>
            <guid isPermaLink="false">https://medium.com/p/b0f465d215f6</guid>
            <category><![CDATA[explainer]]></category>
            <category><![CDATA[spacetime]]></category>
            <category><![CDATA[astrophysics]]></category>
            <category><![CDATA[time]]></category>
            <category><![CDATA[cosmology]]></category>
            <dc:creator><![CDATA[Soumya Deb]]></dc:creator>
            <pubDate>Thu, 11 May 2023 10:19:53 GMT</pubDate>
            <atom:updated>2023-05-11T10:19:53.498Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9XZKPSuTqhmDgyocp8nsFw.jpeg" /></figure><h4>You’re a satellite and the orbital radius is time!</h4><p>You are going around a conceptual hyper-celestial body. Don’t worry, it’s not only you… me too. Everyone is. In fact, all the matter in the observable universe is doing the same.</p><p>The surface of that hyper-celestial body is made of event horizon, and it encloses the singularity at its center.</p><p>&gt; HOL’UP A MINUTE… Isn’t that a blackhole?</p><p>&gt;&gt; It is!</p><p>&gt; Which one?</p><p>&gt;&gt; All of them!!! Black holes are nothing but different windows to the exact same room… we’ll cover that topic another day [TODO: link back]. Here, we’re talking about that room itself, and its internal dimension!</p><p>Now, back to the satellite analogy:</p><p>Just imagine that in this analogical 3D representation of 4D spacetime, we’ve flattened out the spatial dimensions for simplicity, and using a time based polar coordinate — to make up a human-brain-tangible 3D analog of the spacetime.</p><h3>Acceleration &amp; Time Dilation</h3><p>As a satellite, as long as you move at a constant rate (no acceleration)… you are governed by inertia &amp; maintain the exact same radius of orbit.</p><p>Forever and ever.</p><p>But the moment you try to accelerate (or r̵e̵t̵… deaccelerate) you either get to a higher orbit (i.e. acquire longer orbital radius or escape potential)… OR, you fall down to a lower orbit. If you keep accelerating, eventually you’ll come crashing down to the surface.</p><p>Something similar goes for time (i.e. the orbital radius, in this analogy)… faster† you move in spacetime, more dilated (expanded) your perceived time gets; and slower you move, your perceived time gets contracted compared to a fellow satellite that’s maintaining static orbit.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3w76lUZ4MAN5NK8dDUFy0w.jpeg" /><figcaption>Speeding pushes you out of the trench… slowing down spirals you in!</figcaption></figure><p>If you go down to a lower orbit, you need more energy to maintain the orbit, or you fall down… so you have to orbit faster. Opposite happens if you go higher up in the orbital distance.</p><p>That’s why in the real world, bodies close to a blackhole needs extra oomph to stay away from falling in, but for farther away, they can chill and do whatever without any immediate risk of falling in.</p><p><em>† Yes, we’re talking about speed here in analogy, not acceleration - because we compacted spatial dimensions, and working with time remember? So to use time here as a spatial/space-like dimension of 3D geometry… the concept of “something per second” gets weird and comes free with the package etc. etc. boring details… Nobody cares!</em></p><h3>Surface radius &amp; cosmic speed limit</h3><p>If you keep going down to a lower and smaller orbit… The speed you need to maintain the orbit the goes higher and higher, and eventually at the surface, you call that speed the escape velocity (yes, that’s literally the definition of escape velocity).</p><p>On the analogy side of things, when the same happens, and you reach the surface, you call the distance from singularity the cosmic speed limit <a href="https://en.wikipedia.org/wiki/Speed_of_light"><strong><em>c</em></strong></a>††.</p><p>In fact… I lied a little in the beginning… you’re not actually orbiting, you’re hugging the surface and crawling around. 
A space-bound rocket (in the real world) is analogous to an aeroplane taking off… no longer hard-bound crawling the surface, but far from being able to orbit at that speed.</p><p>Your distance from the singularity is exactly equal to the radius of the body itself… and it seems like you’ve hit the floor (quite literally) &amp; you can’t get any closer to it, no matter how much you try.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/850/1*dymSquL0OpolXM8YPBBACQ.png" /><figcaption>The Weyl curvature invariants (polar representation). Image Credit: ResearchGate</figcaption></figure><p>Orbits faster than escape velocity don’t exist… I mean, they exist (on paper) at an even smaller radius… meaning, below the surface. So the very reason there are no underground satellites… is the same reason why we can’t go faster than <strong><em>c</em></strong>.</p><p><em>†† Yes, again… we’ve compacted the time dimension as a spatial dimension here… so acceleration in the real world is speed here, and speed in the real world is distance. </em><a href="https://www.youtube.com/watch?v=sXE8LdXzeHM"><em>Get on with it</em></a><em>!</em></p>
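<p><em>If you want to poke at the numbers on the real-world side of this analogy, here is a minimal Python sketch: plain Newtonian formulas with standard constants, nothing beyond what is described above. It checks that the circular orbital speed climbs as the orbit shrinks, and that the escape velocity hits the cosmic speed limit c exactly at the Schwarzschild radius.</em></p><pre>
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light in vacuum, m/s

def orbital_speed(M, r):
    """Circular orbital speed at radius r around mass M (Newtonian)."""
    return math.sqrt(G * M / r)

def escape_velocity(M, r):
    """Escape velocity from radius r: sqrt(2) times the circular speed."""
    return math.sqrt(2 * G * M / r)

M_sun = 1.989e30                  # kg
r_s = 2 * G * M_sun / c ** 2      # Schwarzschild radius of the Sun, ~2.95 km

# Shrinking the orbit makes both speeds climb...
for r in (10 * r_s, 2 * r_s, r_s):
    print(r / r_s, orbital_speed(M_sun, r) / c, escape_velocity(M_sun, r) / c)

# ...and at the Schwarzschild radius, the escape velocity equals c exactly:
print(escape_velocity(M_sun, r_s) / c)   # -> 1.0
</pre>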
<h3>Fossilization, Penrose diagram &amp; Flippin’ spacetime</h3><p>But wait… I may have — yet again — lied a little.</p><blockquote>you can’t get any closer to it, no matter how much you try</blockquote><p>Actually you can!</p><p>By being stationary in space, and allowing time to swallow you in and gradually drag you down towards the center.</p><p>Don’t believe me, ask all the fossils!</p><figure><img alt="Sedimentary Strata… an analog for “what’s inside the event horizon?”" src="https://cdn-images-1.medium.com/max/1024/1*rRvawLjDGx15N53u6ATCsw.png" /></figure><p>Above the surface, you can move in all the spatial directions, but you can’t traverse time so easily… but the moment you’re fossilizing and are subject to the rules of the subsurface, you’re no longer allowed to traverse in any spatial direction… you can just gradually get dragged in time, embedded into the layers of strata.</p><p>Once you’re fossilized, you realize… that your partner who is fossilized a meter away… is now an impossibly unreachable distance apart. Whereas you and the T-Rex that died on the same spot… millions of years apart… are practically tickle buddies, rubbing bones against each other.</p><p>This is not new or groundbreaking (not anymore, that is)… if you’re familiar with Penrose diagrams, you already have some idea about what I’m talking about. But hopefully this gave you another perspective!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hdZT8IGJBVlCXGgM_UPADQ.jpeg" /><figcaption>Extended Penrose Diagram. See how the blue and red lines change coordinate… Image Credit: PBS SpaceTime</figcaption></figure><h3>Disclaimer</h3><p>This article is like my 4-year-old learning to spell.</p><p>I asked her to spell blue, she intuitively spelled B-L-U.</p><p>I had to accept it and move on &amp; save correcting her for some other time.</p><p>Intuitive understanding has its limits. It doesn’t fill in the details very well. But it does give a quick first cut of a baseline. The better way to use it is to neither throw it away for incompleteness nor dismiss the missing details as pedantic. Let the details gradually enrich the intuition over time.</p><p>Until then, for all intents and purposes… this article is B-L-U.</p><hr><p><a href="https://medium.com/debloper/intuitive-time-b0f465d215f6">Intuitive Time</a> was originally published in <a href="https://medium.com/debloper">debloper</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[So’ham]]></title>
            <link>https://medium.com/debloper/soham-30f5454740f2?source=rss----a3e1a3a13882---4</link>
            <guid isPermaLink="false">https://medium.com/p/30f5454740f2</guid>
            <category><![CDATA[universe]]></category>
            <category><![CDATA[cosmology]]></category>
            <category><![CDATA[self-actualization]]></category>
            <dc:creator><![CDATA[Soumya Deb]]></dc:creator>
            <pubDate>Wed, 08 Mar 2023 02:09:21 GMT</pubDate>
            <atom:updated>2023-03-12T16:41:42.725Z</atom:updated>
            <content:encoded><![CDATA[<h4>“He’s not the Messiah — he’s a very naughty boy!”</h4><h3>Spacetime is not flat</h3><p>It doesn’t have to be, it never pretended to be, and over millennia every effort to project a celestial structure as ‘flat’ has failed miserably. But we simply don’t learn, do we? <a href="https://www.youtube.com/watch?v=2k0SmqbBIpQ">It’s time to stop</a>!</p><p>Minkowski space is a spherical cow.</p><h3>Spacetime is not isotropic</h3><p>Again… it doesn’t have to be, and now* we even have solid evidence in the form of the CMB that it never was, as far back as our observations can go.</p><blockquote>* read the term “now” as if you’re reading this back in 2015, i.e. when I originally came to this realization, within a year or so of the Planck observations confirming &amp; emphasizing the WMAP observations…</blockquote><p>Spacetime is, and always has been, terrain-like.</p><h3>There is no dark matter</h3><p>Everything from the smaller-scale effect of <a href="https://en.wikipedia.org/wiki/Galaxy_rotation_curve">galaxy rotations</a> to increasingly larger-scale effects such as superclusters, supervoids &amp; great walls can be explained with the anisotropy of spacetime.</p><p>We introduce the concept of dark matter only to make half-assed sense of some of these phenomena (especially to enforce the flatness &amp; isotropy of spacetime, among other things, <a href="https://www.newindianexpress.com/states/karnataka/2022/aug/09/anti-big-bang-theory-scientists-face-censorship-by-international-journals-2485604.html">because the status quo demands so</a>); which is analogous to several ancient concepts (for example, <a href="https://en.wikipedia.org/wiki/World_Turtle">this</a> or <a href="https://en.wikipedia.org/wiki/Aether_(classical_element)">that</a> and many such), each adding a new layer of idiocy to cover up the idiocy of the previous layer.</p><blockquote>Anisotropy of spacetime is easily verifiable with gravitational lensing effects; e.g. a void/supervoid should show a concave lensing effect — which is a property that dark matter can’t demonstrate — unless… hold on… Occam’s Razor be damned… unless we add another layer of “great” ideas — anti-dark-matter, with negative gravity!!!</blockquote><h3>Singularity is unique</h3><p>The simplest way to visualize this is by drawing axes on the gravitational wells created by black holes in a 2D representation of spacetime, and showing that, as they’re parallel, they meet each other at infinity. The gravitational singularity itself is also positioned at infinity. Which basically just means all those seemingly individual singularities from different black holes are the same, one and only singularity.</p><p>A more complex (but more generic) way to visualize this would be to imagine the gravitational wells as <a href="https://twitter.com/Debloper/status/1561502652705685504">nested funnelform corolla</a>. 
We are already familiar with one of those, as we call it the <a href="https://en.wikipedia.org/wiki/Expansion_of_the_universe#/media/File:CMB_Timeline300_no_WMAP.jpg">expanding universe</a>.</p><h3>Singularity has two names</h3><p>The Big Bang &amp; the traditional singularity at the center of black holes are not two different entities; they’re the same thing, termed differently depending on which side of the event horizon the observer is on.</p><p>I’m inclined to call the Big Bang the <strong>white hole singularity</strong>.</p><blockquote>BTW, that’s also why it’s impossible to find a naked singularity.</blockquote><p>It is clearly directional, from spacebound or timebound observers’ perspective (which itself is relative, depending on which side of the singularity the observer is trying to perceive). Lightlike observers see themselves as geodesics.</p><blockquote>Explain: <em>x = ±ct</em> i.e. the interval <em>s</em> is either on the same side, or on the other side of the event horizon; if it’s the former for a particle, then it’s the latter for its antiparticle (analogically similar to 2 travelers setting off in opposite directions on a latitude line, at the same speed, from any arbitrary point — by the time they meet each other, only one would have crossed the international date line). The longitude/latitude analogy can also be extended to represent the Feynman diagram of matter-antimatter annihilation (along a latitude), and the radiating photon (over the longitude).</blockquote><h3>Causality is a loop</h3><p>Much like the latitudes, causality is a loop. Time can be seen as a phenomenon perceived by an observer bound to a latitude, tracking the sun’s motion.</p><p>This analogy can be used to explain:</p><ul><li>why changing latitude requires extra effort (Newton’s first law)</li><li>why it takes more or less time to traverse different latitudes (time dilation)</li><li>why, regardless of the latitude the observer is traversing, a full loop always takes 24 hrs (cosmic speed limit)</li><li>and why at a certain point (rather, seemingly two points) one can experience all the times of the day at once (the singularity).</li></ul><p>The only reason we haven’t realized that causality is a loop (yet) is because no one has set sail to the west, hoping to discover India… or the spacetime analogue of whatever that might be.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ykNdwRY7LQw1_k8CogftiQ.jpeg" /></figure><h3>Dark energy is a misunderstanding</h3><p>The Schwarzschild radius is linearly proportional to mass, which is proportional to the actual radius cubed. In short, for really large objects, even with meager density (for example, <a href="https://twitter.com/Debloper/status/1561502648104534017">the observable universe</a>), the actual body will be totally enclosed inside its own Schwarzschild radius.</p><p>A 13.8 billion lightyear sphere with a density of 9*10^–27 kg/m^3 would seem like a black hole with a 93 billion lightyear diameter (which we call the observable universe), which no longer follows the intuition of energy density, so we pushed in <em>dark energy</em> to fill a void that never existed.</p><p>Any linear change in the local density (due to the curvature or terrain-like structure of spacetime) will seem exponential if the observer is tracking the hypothetical Schwarzschild radius (i.e. the observable universe), because it can’t intuitively track the “actual radius” (which is 13.8 billion lightyears away along the temporal dimension, perpendicular to the 3D space).</p>
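<p><em>A back-of-envelope Python sketch of that scaling claim, using standard constants and the figures quoted above as the only inputs: at a fixed density the enclosed mass grows as the radius cubed, so the Schwarzschild radius (2GM/c^2) does too, and a large enough sphere of any density ends up comparable to, or enclosed by, its own horizon. The exact output depends on which radius/density one plugs in.</em></p><pre>
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
LY = 9.461e15    # metres per lightyear

def schwarzschild_radius_of_sphere(radius_m, density):
    """r_s = 2GM/c^2 for a uniform sphere, with M = (4/3) * pi * r^3 * rho."""
    M = (4.0 / 3.0) * math.pi * radius_m ** 3 * density
    return 2 * G * M / c ** 2

r = 13.8e9 * LY   # the 13.8 billion lightyear sphere quoted above
rho = 9e-27       # kg/m^3, the density quoted above

# Schwarzschild radius in billions of lightyears: comparable to the
# sphere's own radius, i.e. the body sits at/inside its own horizon.
print(schwarzschild_radius_of_sphere(r, rho) / (1e9 * LY))
</pre>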
<h3>The universe is a self-actualization</h3><p>It’s looped back into itself, much like an apple-balloon that works as a Möbius strip, and we’re part of its self-actualization process.</p><p>BTW, the apple-balloon shape also seems to incorporate the two traversals needed to come back to the same position: one inside, one outside the balloon skin (event horizon), with the crossover point being the singularity at the center (where the two ends meet).</p><p>TODO: add animation to explain this point.</p><h3>It’s possible to send information through the event horizon</h3><p>It may seem like no causally significant information can be sent past the event horizon; with a simple thought experiment, we can show that it’s actually possible to send enough information to make the observers on the other side realize that something is not quite right.</p><p>Let’s keep chucking all the matter into a black hole, carefully keeping aside all the antimatter. Let’s say that matter eventually (how about in 13.8 billion years) ends up becoming intelligent life-forms that understand the concept of matter &amp; antimatter, and are quite curious why there’s such an abundance of matter, and not enough antimatter to make up for it.</p><p>Of them, a certain someone decrees that if they ever were to jumpstart the black hole of self-actualization, they should make sure to fill it up with only matter and mostly avoid antimatter; so this inherent disproportionality of matter and antimatter itself can work as the encoded message in the fabric of reality (as a note to past/future self).</p><h3>Identifying the arrow of time</h3><p>Anyone in control of a system sees the system’s entropy gradually decrease (the system attains its desired state); whereas anyone being controlled sees the entropy of the system increase (the system attains a state that’s incomprehensible as an orderly, desired state to that observer).</p><p>We seem to be at the inflection point where we’ve started to control outcomes, even if in limited quantity presently, but at an exponential rate — which means we’re headed towards the design &amp; execution phase for the self-actualization of the universe.</p><hr><p><a href="https://medium.com/debloper/soham-30f5454740f2">So’ham</a> was originally published in <a href="https://medium.com/debloper">debloper</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Beyond Godlike]]></title>
            <link>https://medium.com/debloper/beyond-godlike-55f6dc26ae5f?source=rss----a3e1a3a13882---4</link>
            <guid isPermaLink="false">https://medium.com/p/55f6dc26ae5f</guid>
            <category><![CDATA[science]]></category>
            <category><![CDATA[kardashev-scale]]></category>
            <category><![CDATA[fermi-paradox]]></category>
            <category><![CDATA[causality]]></category>
            <category><![CDATA[metaphysics]]></category>
            <dc:creator><![CDATA[Soumya Deb]]></dc:creator>
            <pubDate>Fri, 01 Nov 2019 00:37:43 GMT</pubDate>
            <atom:updated>2019-11-01T01:18:22.653Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<p>Bad decisions make good stories. Hostile-by-default-on-contact based plots in pretty much all the alien sci-fi — from comic books to novels to movies — subscribe to this notion over and over again.</p><p><strong>An exocivilization advanced enough to come in contact with another civilization will never explore the possibility to meet each other on unfriendly terms.</strong> We discussed this quite thoroughly in <a href="https://medium.com/debloper/godlike-7e9939385d38">the previous part</a> of this two-part series on assimilating Fermi Paradox.</p><p>By the end of the first part, we narrowed down the prospective possibilities into three general buckets:</p><ol><li><strong>We’re in a simulation.</strong> Nihilistic, and unworthy of discussion.</li><li><strong>We’re the eventual alpha-civilization.</strong> Optimistic edge-case, the most extreme &amp; unfalsifiable god-complex, discussed in the first part.</li><li><strong>We’re an adolescent member of an interconnected network of symbiotic supercivilizations.</strong> Realistic(!) generic-case, free from the base assumptions of #2, and thus most abstract &amp; hardest to approach.</li></ol><p>So, naturally, we have to approach it. In this article.</p><h3>A quick recap</h3><p>In <a href="https://medium.com/debloper/godlike-7e9939385d38">the previous article</a>, we had three special considerations, which led to its conclusion: that we are the supercivilization that eventually becomes the gatekeeper civilization of our Universe.</p><p>Those special considerations were:</p><ul><li>Single outcome</li><li>Exclusivity of the extreme</li><li>Hostile-by-default interactions</li></ul><p>However, those considerations are only boundary conditions added for simplicity, and not necessarily fundamental. A generic solution will have to be unbounded by these conditions, and that’s what we’re up to next — one step at a time.</p><h3>A detour into the Multiverse</h3><p>A phase space is where all the possible states of a system are available. Let me try to (over)simplify the concept. Similar to when you were introduced to the concept of time as a dimension — you have to break out of your pit of common sense to get a grasp of it. Our perceptual 3D world is filled with all the “<em>where</em>” — but time as a dimension elongates along all the “<em>when</em>” — if punched together, in spacetime, both work in tandem. Phase space, in this analogy, will then be an ensemble of all the “which” of that spacetime.</p><p>To avoid the technical differences while taking an analogy too far, and also to avoid getting beaten up on the streets by theoretical physicists — I’m going to stop using the term phase space, and rather call this something friendlier, and more familiar — <strong>Multiverse</strong>.</p><p>You see a cube of cheese in front of you. 
In a (quantum) multiverse, that cheese is:</p><ul><li>not there (that milk was used to make yogurt instead)</li><li>infested with fungus</li><li>partially eaten by a mouse</li><li>about to be grated for a meal</li><li>… etc.</li></ul><p>All of the above.</p><p>Starting from what’s called a “set of initial conditions” (read: the most fundamental laws of the host universe), each universe in the multiverse contains all the possible iterations of eventualities.</p><p>From forcing particles to go through a particular slit in a <a href="https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser">double-slit experiment</a>, to deciding whether or not to drop out of school — whenever a causal relation is forced by choice, that worldline in the multiverse is split across all the probabilistic outcomes. Yet, it collapses into a single outcome, differently, for the observers in each of those universes.</p><p>So, we see only one outcome take place. That’s just because we have no access to the universes which had different outcomes. Each of the other universes goes on about its own way as the <a href="https://en.wikipedia.org/wiki/Butterfly_effect">butterfly effect</a> of that event, until something else makes it split again.</p><p>The probability of universes where trilobites thrived, or dinosaurs celebrated their industrial age, or Neanderthals colonized the galaxy isn’t too different from that of Homo sapiens becoming a universal supercivilization.</p><p><strong>So, we can see that the ‘single outcome’ we previously based things on doesn’t necessarily apply at a higher level.</strong> We have to assume all the supercivilizations that are possible, within the scope of our universe’s starting conditions, and theorize their interactions from there.</p><h3>A peek into the superbrains</h3><p>There’s a famous Tagore poem that goes:</p><blockquote>উত্তম নিশ্চিন্তে চলে অধমের সাথে।<br>তিনিই মধ্যম যিনি চলেন তফাতে।।</blockquote><p>Loosely translates to: “<em>The superior feels fine hanging out with everyone (even the most inferior). Mediocre is the one who (has to) maintain safe distances.</em>”</p><p>Feelings of insecurity, uncertainty, fear… defaulting to hostility on contact etc. are not the attributes of the alpha.</p><p>Take the example of modern humans at the current time. We’ve evolved enough to come to a point where we’re not threatened by any other species. Doesn’t matter whether they have longer fangs, sharper claws, lethal venom — we as a species don’t really feel at survival risk from them. In fact, we take it upon ourselves to ensure the other species don’t go extinct, or even have them as pets/companions. It’s silly to even presume having a serious enmity/grudge against some other flora/fauna. So much so that, for species very close to us on the evolutionary scale, like chimps, gorillas etc., we love to observe/study their cognitive growth potential with adoration &amp; admiration.</p><p>There’s no reason this interaction between the developed vs. the underdeveloped would have to be any different at a higher, abstracted level of cognition. The civilizations that crossed the great-filter would consider themselves developed, and the ones yet to cross it (like the current human civilization) underdeveloped. 
All the supercivilizations that crossed the great-filter wouldn’t necessarily be identical — much like human races and demographic distributions; but they’ll still identify with each other on a level plane more than they would with the underdeveloped civilizations.</p><p><strong>This implies that the “hostile-by-default interactions” assumption is not only a special case, it’s most likely a false one in practice.</strong></p><p>It’s lonely at the top, and whoever is at the top would be glad to have someone by their side, someone to talk to, someone who can comprehend them. They’d ensure the great filters are properly in place to determine which kinds of civilizations are allowed in, filtering them out of existence if they don’t qualify (pest-control). But the ones who do eventually qualify shouldn’t expect hostility from them.</p><p><strong>Which, in turn, also implies that the “exclusivity of the extreme” doesn’t hold true anymore.</strong> Because it turns out to be a dependent variable of hostile interactions.</p><h3>Summing it up, so far…</h3><ul><li>It’s more likely that there are innumerable supercivilizations out there than that there aren’t</li><li>They’re mighty enough to be able to manipulate <a href="https://en.wikipedia.org/wiki/World_line">worldlines</a> at will</li><li>They’re most likely connected together by a common consortium</li><li>They look forward to welcoming other upcoming civilizations, worthy of joining the table</li></ul><p>Some indirect predictions based on these would be:</p><ul><li>They (probably) take precautions to prevent unworthy civilizations from reaching super status by means of implementing the great filters as spacetime engineering, to keep the consortium in order</li><li>A civilization like human civilization — contrary to the usual analogies of it being in its infancy — actually might be embryonic. The birthing process would likely be analogous to reaching <em>the era of temporal engineering</em>. It may sound ludicrous, but similar to crossing oceans on wind-driven wooden ships, crossing the event horizon will eventually become a reality — probably sooner than the most optimistic of expectations.</li><li>The universe we perceive is (probably) just a hatchery (or, one of the hatcheries) with limited constraints, sustenance, observation and safety, like that of a womb. It’s a controlled environment, which is (probably) why we see the lucky Goldilocks zones of “<a href="https://en.wikipedia.org/wiki/Fine-tuned_universe">just right</a>” sequences of events.</li></ul><h3>Yet another detour through embryonic analogy</h3><p>Let’s consider: a human foetus achieved superconsciousness. It dabbled in the “am I alone?” kinda questions for a while, then got to work to find out.</p><p>Through a rigorous process of calculating the amount of stuff (let’s say, placental enzymes) it’s releasing into the outgoing bloodstream, and the density of them in the incoming bloodstream, it comes to the conclusion that the amount of blood required for this has to be about 5 litres, and the body to support that has to be about 60kg. But it can only account for about 250ml of blood and 3kg of its body mass.</p><p>What it can perceive is just 1/20th of what must be out there. 
Where’s the rest of the stuff?!?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/578/1*WWVlQqmGYcREMb8lu9VfXg.png" /><figcaption>ref: <a href="https://chandra.harvard.edu/resources/illustrations/darkmatter.html">https://chandra.harvard.edu/resources/illustrations/darkmatter.html</a></figcaption></figure><p>Funnily and coincidentally enough, that seems a lot like our perception of the universe and its known-to-unknown ratio (i.e. ~5%).</p><p>Existence itself might seem like a very scary, lonely, and insanely lucky fluke to the unborn baby. Add to that the events it counts: when it needed bones to develop, there was a sudden surge of calcium; and when it needed the lungs to develop, there was a surge of corticosteroids in the incoming bloodstream. It thinks… what are the odds!</p><p>Now, think of the mother/parents and doctor(s) expecting the baby. How do they look at it? What can they do to explain to the baby all that is happening? Not much. Neither can/would they pull the baby out into the world prematurely. All they can do is ensure the best care for a healthy birth. And that’s it.</p><p>The baby eventually, over the course of childhood, will find out that — no, it’s not alone; there are 7 billion of them out there. Some of them not only love it, but worked really hard to ensure its well-being when it wasn’t self-capable.</p><p>The current state of human sentience is a LOT like this. Isn’t it?</p><h3>The sentience that lived</h3><p>Coming back to the main thread, we got up to the point that we’re an embryonic civilization, on our way to the birthing process, (likely) under the care of supercivilization(s), who can’t wait to see us grow up.</p><p>Now, what are they looking at in terms of growth? The humans?</p><p>Likely not… I mean, not only.</p><p>From a bio-chemical soup to single-cell organisms to multi-organ lifeforms — if the line is stretched further forward, it doesn’t go towards one winning species in particular, as one might commonly assume. Even if humans might be the most important/interesting part in this chapter of the multiverse — like the brain in the human body — it’s not the only thing that makes the lifeform work as a whole.</p><p>There has to be a very careful symbiotic synergy of all the parts working in tandem, each doing its own different part, but towards a very definite common goal of sustaining life. It’s true from the most basic single-cell organism to the largest single lifeform known to us. Any anti-synergic mutation will eventually be shaken off — as George Carlin would put it — “<a href="https://youtu.be/7W33HRc1A6c?t=259">like a bad case of fleas</a>”.</p><p>There’s no reason it will be any different at a higher abstraction level.</p><p>Perhaps <a href="https://en.wikipedia.org/wiki/Great_Filter">the great filter</a> is — in itself — a civilization’s own limitation: failing to work as a single unified unit, with all the lifeforms, species, races and cognitive &amp; technological progress etc. working as different organs of a single body.</p><p>A civilization that fails to harmonize itself is obviously too disastrous &amp; incompetent to be trusted with even greater power, so it’s best to let go of them. Like discarding the weaklings at the <a href="https://en.wikipedia.org/wiki/Agoge#Structure">Spartan Agoge</a> process, or maybe a <strong>cosmological abortion</strong>, if you will.</p><p>Would we be able to cross the great filter, or are we going to be among the statistical majority of failed efforts? 
We can only try our best at making the right choices, so they lead us towards the most promising worldline for us all.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YWPbgiYUIacYkDG9GUX4Fw.jpeg" /></figure><h3>Closure</h3><p>When I started thinking about this topic, years back, I had thought of writing an actual book on it, explaining everything I wanted to. But I realized it’s very unlikely that someone would actually read it, at least in the foreseeable future. So, I got this out as a two-part thought experiment.</p><p>Maybe one day I’ll come back to read it again to see if the logic still holds true.</p><p>Until then, if you ask me whether I believe in a higher power, whether I believe in intelligent design, whether I believe in God — <em>this</em> will remain my answer to that question.</p><hr><p><a href="https://medium.com/debloper/beyond-godlike-55f6dc26ae5f">Beyond Godlike</a> was originally published in <a href="https://medium.com/debloper">debloper</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Godlike]]></title>
            <link>https://medium.com/debloper/godlike-7e9939385d38?source=rss----a3e1a3a13882---4</link>
            <guid isPermaLink="false">https://medium.com/p/7e9939385d38</guid>
            <category><![CDATA[fermi-paradox]]></category>
            <category><![CDATA[kardashev-scale]]></category>
            <category><![CDATA[causality]]></category>
            <category><![CDATA[time-loop]]></category>
            <category><![CDATA[metaphysics]]></category>
            <dc:creator><![CDATA[Soumya Deb]]></dc:creator>
            <pubDate>Sun, 01 Sep 2019 18:50:23 GMT</pubDate>
            <atom:updated>2022-12-08T22:09:17.376Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<blockquote>Where is everybody?</blockquote><p>Most of the common approaches to address the <a href="https://en.wikipedia.org/wiki/Fermi_paradox">Fermi paradox</a>, and the assumptions around how exo-civilizations are likely to interact with others, are too rudimentary for any alien civilization advanced enough to stumble across another one.</p><p>The Fermi paradox is only paradoxical (if it is at all) because of our poor understanding of <a href="https://en.wikipedia.org/wiki/Causal_loop">temporal causality</a>. Its premises, as well as most of the possible explanations, rely heavily on time being linear, orthogonal &amp; unidirectional. They seem to make some sense “going forward” but fail miserably to explain the same events in retrospect.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*r60EJfj4ymrp60wdaP0z-A.png" /><figcaption>We tend to heavily miscalculate civilization growth rates. We imagine future growth to be linear &amp; even lag behind estimating our own current state (which we do based on our recent past &amp; not in real time).</figcaption></figure><p>For example, it’s just been over 100 years since practical manned flight, and we’ve already sent probes past the termination shock of the solar system. By the time we are to celebrate the centenary of the first transatlantic flight, some among us will raise a glass from Mars after completing the first interplanetary trip. Progress is exponential.</p><p>It’s when one lays down a reasonable explanation that holds up under both proactive and retroactive causality, while considering the exponential nature of growth, that the paradox itself ceases to exist.</p><p>Let me explain why…</p><h3>The Socratic Method</h3><p><strong>Let’s consider</strong>, for the sake of argument, that there <strong>is</strong> an arbitrary civilization that’s about to reach galactic dominance. It made it through all the prior <a href="https://en.wikipedia.org/wiki/Great_Filter">great filters</a> by insane luck and its composure over technologies — avoiding self-destruction.</p><p>Let’s put a hypothetical barrier that lies ahead of them: <strong><em>control over time</em></strong>. Let’s say this civilization hasn’t achieved this yet, and is thus still limited by the vastness of intergalactic space &amp; the event horizon(s) of the observable universe. All the information they can consume from all the corners of their observable universe is still limited by <strong><em>c</em></strong> (the speed of light in vacuum, the cosmic speed limit).</p><p>Even if they had incredible observing tools/utilities, they’re still locked down to a causal locality. Which means they’re limited by a causal latency — how much spacetime radius they can collect information from and react to, if necessary, in a causally effective manner. For example, to react to an event that’s 1 light year away, the information is received after a year, and any immediate countermeasure will need another year to take effect (as the best-case scenario). A longer radius has progressively worse latency.</p>
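<p><em>A minimal Python sketch of that causal-latency bound, just to illustrate the arithmetic above (nothing more is assumed): information about an event at distance d arrives after d/c, and the earliest any countermeasure can take effect back at the source is another d/c later.</em></p><pre>
LIGHTYEARS_PER_YEAR = 1.0   # light covers 1 lightyear per year, by definition

def causal_latency_years(distance_ly):
    """Best-case round trip: see an event at distance_ly, then act on it."""
    one_way = distance_ly / LIGHTYEARS_PER_YEAR
    return 2 * one_way

for d in (1, 10, 100):
    print(d, "ly away:", causal_latency_years(d), "years, best case")
</pre>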
<p><strong>Let’s also consider</strong>, according to the predominant norm in most of the explanations of the Fermi paradox (and alien movies), that they’re threatened by the possible intelligent civilization(s) elsewhere in the universe, and are hostile towards them.</p><p>So far, we’ve assumed nothing beyond the ordinary (as it’s widely believed), and realistically only have a <strong>very limited boolean choice</strong>.</p><p>What would be this civilization’s next immediate step? <strong>Take your pick</strong>:</p><ol><li><em>Go all in on intergalactic intelligence &amp; weaponry — seek to destroy all other possible intelligent lives light-years away, OR</em></li><li><em>Invest in taking complete control over spacetime, master luminal/superluminal travel, break the next immediate limiting barrier</em></li></ol><p>A civilization that hasn’t learned to control spacetime just yet, and is scared of being attacked by an alien civilization, at a macro level, can only opt to do one of those two things to ensure continued survival. Either get ahead of the competition, or try to eliminate the competition.</p><p>We tend to assume a choice that we’re familiar with through the course of our limited recorded history, and tend to think it works the same way at a supermassive scale, spanning not only distance, but time as well.</p><p>Have you made your choice? Good. Let’s discuss both situations:</p><h4>Case 1: Intergalactic intelligence &amp; weaponry</h4><p>This seems to be the most obvious choice we tend to make, because that’s what history teaches us. A lot of discussions about the Fermi paradox revolve around this. But <strong>this is an absolutely futile effort</strong>.</p><p>If they chose to wage war against the distant civilizations, they have to do it within the immediate/foreseeable present/future. Let’s say, at near light speed, they can send their battleships to the other civilizations in a short amount of time (but not zero time).</p><p>There are three sub-situations that may arise:</p><ol><li><strong>The other civilization is weaker than them:</strong> in which case, it’s safe to assume that the other civilization would have crossed the luminal barrier later than them, had they invested in it instead. Detailed in Case 2.</li><li><strong>The other civilization is comparable to them:</strong> in which case, they’re giving the other civilization more time to prepare; no matter how small Δt is, it works as an advantage for the defending civilization. Try to imagine the exponential growth the defending civilization will have the benefit of, but the attacking troops largely won’t. So whoever proactively wages war is at a disadvantage.</li><li><strong>The other civilization is stronger than them:</strong> &lt;leeroy_jenkins.mp4&gt;</li></ol><p>So, it’s safe to assume all the possibilities in Case 1 lead to the same conclusion — it’s either unnecessary, disadvantageous or straight up destructive to proactively go into a fight vs. another civilization within approachable spacetime (beyond a certain distance of a light year or so, as a practical estimation). The further apart they are, the larger the risks get.</p><p>What about exploring Case 2 then? 
Turns out, it not only allows a safer route to the first sub-situation; it effectively causes all the other situations to collapse into the first sub-situation.</p><h4>Case 2: Taking control over time</h4><p>If the accelerated return of technological growth is anything to go by, big leaps forward take less and less time as we go <em>forward</em>. For an incomprehensibly sophisticated civilization, the big leaps will not be spaced out in centuries or decades. They’ll start popping by the weeks, days, hours.</p><blockquote>Update Dec 2022: we are already at a point where the leaps are coming by weeks. A month of digital detox causes a significant dissonance to get back on track (yes, I tried). What seemed like at least 5 years into the future last December — in terms of LLMs &amp; generative AI — is already a reality now. 2023 is just gonna get crazier.</blockquote><p>So, crossing this hypothetical barrier, for a post-singularity civilization, would take a reasonable amount of time, but that’d be much smaller than what we can now predict it to be; and a few orders of magnitude smaller than the time it’d take to wage galactic wars.</p><p>Taking control over time will be the last big leap forward. Because after that, forward and backward stop making sense. They become a continuum with different pathing options.</p><p>Anyway, once a civilization takes control over time, they practically become omnipresent by our universe’s set of initial conditions. They can t̵r̵a̵v̵e̵l̵ teleport to anywhere/anywhen. Similar to a photon, the universe becomes a standstill <a href="https://en.wikipedia.org/wiki/Spime#Origin">spime</a>, and you get to decide at which point in spacetime you wish to be / pop up at.</p><p>They can destroy any other possible civilization that can/may/will pose a threat to them, and reorder the timeline to fit them well. I say destroy, but the actuation process is more of <em>preventing from happening</em>, but I believe that’s implied.</p><p><strong>So, the first civilization to take control over time itself works as the great barrier/censorship body that decides which other civilization(s) are to be allowed to prosper.</strong></p><p>So, even if we start with the common assumptions, and apply causal interactions limited by <strong><em>c</em></strong>, we eventually arrive at a single conclusion that seems to be the resultant synthesis.</p><p>Are we good till here?</p><p>Cool.</p><h3>Reverse Temporal Causality</h3><p>We’ve established that going to fight alien civilizations without taking complete control over spacetime is a stupid idea. But we must also verify the whole event (Case 2) in retrospect: whether it stays consistent, and if so, what it implies.</p><p>Let’s try to refute the point made with Case 2 and see if there are plot-holes in it similar to most other explanations, with the help of reverse temporal causality. We’ll have to assume:</p><ol><li>The existence of a superintelligent civilization, that has overcome the great barrier of temporal causality.</li><li>They intend to keep their universal domination.</li></ol><p>For the first assumption, it doesn’t matter when they’ve achieved it — past, present or in the future (from our frame of reference). All that matters is that if they achieve it, they can have retroactive interactions with the chronology of the universe to ensure the most promising return for themselves.</p><p>The second assumption is a bit of an off-the-norm concept to understand. 
For a spatial (and temporally limited) civilization, the best way to clear off pathways is to destroy what’s blocking it — take the analogy of an ant colony on the site where a bridge is planned to be built. But for a civilization with control over the temporal dimension, the easier way is to not let the ant colony form there in the first place.</p><blockquote>Seems crazy? It’s not… we do that routinely ourselves, even now. Each time we clean our house, we proactively deter any mold from ‘colonizing’ its walls &amp; floors. You never waged war or went on a killing spree to remove the fungi from your work-desk… you simply never allowed it to happen.</blockquote><p>Which is to say, <strong>if human civilization</strong></p><ol><li><strong>was ever to get in the way of another supercivilization</strong>, then human civilization’s chronology would’ve been moved in a way that we don’t get in their way (best case scenario), or human civilization would’ve been revoked from ever forming (worst case scenario).</li><li><strong>was ever to pose a threat to another supercivilization</strong>, then human civilization is most likely to be revoked from ever actualizing.</li></ol><p>There are a lot of sub-clauses and scenarios that come in here if we were to analyze their intent in full spectrum (of our limited understanding). But the overarching, recurring and consistent solution set here is — no matter how much we develop further</p><ol><li>either, we don’t step on another supercivilization’s foot — ever</li><li>or, we (will) get along fine with them</li><li>or, we don’t exist</li></ol><p>We very much exist (cue Descartes). So, within the reach of the probability space, one of the first two solutions is in effect. Which means, <strong>we are never going to face off against another supercivilization on unfriendly terms</strong>.</p><p>Ever.</p><p>No matter how technologically evolved and threatening we become.</p><h3>We’re not in danger</h3><p>WE. ARE. THE. DANGER.</p><p>And that appears to be true for either interpretation of the statement.</p><p>The first interpretation is: <strong>we become the great gatekeepers</strong>: the first civilization to unlock temporal manipulation (time travel), creating/ensuring the necessary initial conditions for our civilization to be born, develop &amp; get to that point of universal domination.</p><p>The second interpretation is: <strong>the only reasonable way to destroy our civilization is from within, and not from external influence</strong>. If we were to go extinct, it will not be at the hands of another supercivilization, but because we were too incompetent to become a supercivilization ourselves.</p><h4>Looking through the optimistic filter</h4><p>We are going to conditionally ignore the second interpretation, because in the <a href="https://en.wikipedia.org/wiki/Phase_space">phase space</a>, the universes where the dinosaurs weren’t killed in time, or we nuked ourselves to death, or AI destroyed humanity are all equally likely scenarios. And so is human civilization becoming a Godlike supercivilization. 
Just yet another point in the phase space.</p><p>The probability of us (eventually) reaching supercivilization status — as long as it’s non-zero, which it seems to be — is no smaller, from where we stand now, than the combined probability of the position of Earth in the solar system, the high availability of liquid water, the conditions for amino acids to form, the radical change of the atmosphere due to cyanobacteria, multiple extinction events eliminating heavyweight competitors, and multiple prospective disastrous events that needed averting to get here in the first place (i.e. the factors behind the <a href="https://en.wikipedia.org/wiki/Fine-tuned_universe">Fine-tuned Universe</a> hypothesis).</p><p>Isn’t it uncanny that the most-likely outcomes of any of those events somehow got suppressed, and the infinitesimally rare outcomes leading us along the most promising path of eventually becoming a supercivilization are the only courses of events we can see?</p><p>Hypothetically, <strong>is it too far-fetched to assume a causal loop to be in effect</strong>? Through endless trials and errors in probability planes in the phase space, once we arrive at the point of control of the probability itself, we ensure the other probabilities cease to exist, leading to that point of control being the only probabilistic outcome from the set of initial conditions.</p><p>From that angle, the argument of us being “<em>too incompetent to become a supercivilization</em>” fails, inductively. As it always would have been… through and through.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lgWYvUIWvvWZNpP6s577Vw.jpeg" /></figure><h3>Intermission</h3><p>To summarize: by applying bidirectional causal consistency, we have greatly narrowed down the discussions around the Fermi Paradox, and our likely place in the Universe.</p><p>Crazy as it sounds… in this entire article, we just barely scratched the surface of a special case (with the three following assumptions as boundary conditions). A model centered around our civilization, with assumptions &amp; results that aren’t necessarily generic, like:</p><ul><li>Single outcome</li><li>Exclusivity of the extreme</li><li>Hostile-by-default interactions</li></ul><p>The ultimate reality may not be a linear/looped extreme evolution of a single civilization that has won the universal <a href="https://en.wikipedia.org/wiki/Battle_royale_game">battle royale</a> — that may just be an oversimplified view. It may be an interconnected network of symbiotic supercivilizations that doesn’t require these assumptions as baselines.</p><p><strong>That’s the topic of our next part in this two-part series.</strong></p><p>See you all in <a href="https://medium.com/debloper/beyond-godlike-55f6dc26ae5f">part two</a>.</p><hr><p><a href="https://medium.com/debloper/godlike-7e9939385d38">Godlike</a> was originally published in <a href="https://medium.com/debloper">debloper</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Enroute to Hybrid Superintelligence]]></title>
            <link>https://medium.com/debloper/enroute-to-hybrid-superintelligence-e469b4960042?source=rss----a3e1a3a13882---4</link>
            <guid isPermaLink="false">https://medium.com/p/e469b4960042</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[superintelligence]]></category>
            <category><![CDATA[singularity]]></category>
            <dc:creator><![CDATA[Soumya Deb]]></dc:creator>
            <pubDate>Fri, 02 Aug 2019 21:33:29 GMT</pubDate>
            <atom:updated>2025-01-11T21:43:25.800Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nc-sa/4.0/</cc:license>
            <content:encoded><![CDATA[<p><em>This is not a work of scientific literature; this is a thought experiment at best. This article assumes the reader’s prior understanding of (artificial) intelligence, the plausibility, timelines &amp; risk-factors.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*s9aJZL5R7gHOz1RBcHK3NQ.jpeg" /></figure><p><strong><em>TL;DR: the best way to achieve superintelligence is not through ANI → AGI → ASI — it is through a symbiotic/hybrid (human-machine) intelligence iteratively achieving exponential growth.</em></strong></p><h3>Why are we building AGI anyway?</h3><p>Presently, the AI status quo is directed towards building a safe AGI.</p><p>The discussions, policies &amp; narratives are dominantly <strong>us vs. them</strong> (humans vs. machines). There’s also an immense effort focused on developing AGI, without enough contingency plans for what happens if/when we get there. Not to mention the incredible amount of social, ethical &amp; political pressure that hinders the progress, as if we weren’t slow enough already because of our “human” limitations.</p><p><strong>We’re so focused on building AGI, we’re not stopping to ask the question — why are we doing it anyway?</strong></p><p>Because we won’t get much of a chance to ask that question the closer we get to the point. That’s, of course, taking the most optimistic view that we do get there.</p><p>Please, make no mistake: I am not against the AI efforts/research. Quite the opposite. My intention here is to suggest a safer shortcut to arrive at the actual end-game — achieving superintelligence as a species.</p><h3>Superintelligence Barrier</h3><blockquote>The point on the cognitive expansion scale where superintelligence is understood at a ground level.</blockquote><p>Not to be confused with <em>techno-cognitive singularity</em>. This barrier is analogous to seeing an island from a ship to confirm its whereabouts (direction/distance/weather etc.), whereas the singularity is disembarking on the island itself.</p><p>To understand why we’d need to cross this barrier before we are to have any hope of creating “safe” AI of any scale, we’ll need to take an analogical detour.</p><h4>Brief history of ‘unsafe’ technologies</h4><p>From fire and electricity to nuclear power, there is no lack of examples of the gradual process of <em>discovering, assimilating, utilizing &amp; harnessing</em> an unsafe technology.</p><p>Luckily, all those technologies have had — from what it appears retrospectively — limited risk impact towards human civilization as a whole. Except for nuclear power, most other potentially unsafe technologies had local impacts at worst. And for nuclear power, we’re all too aware of prospective disasters, even when we scientifically understand a technology fairly well. If we’re not working in proper alignment &amp; harmony as a species to harness that technology, we can get pretty close to destroying ourselves.</p><blockquote>Our collective wisdom of not pressing the red button has to significantly outweigh the trigger-happy giddiness to press that button, even accidentally.</blockquote><p>So it seems, in order to safely harness an unsafe technology, we (as a species) have to get ahead of that technology, as a baseline requirement. There’s no shortcut. 
And to harness superintelligence, this <em>getting ahead</em> means crossing the superintelligence barrier.</p><p>What’s <em>safe</em> gets heavily distorted the closer we get to superintelligence (even if artificially), and the boundaries we impose now may have limited-to-meaningless implications, &amp; other (unforeseen) constraints would start dominating the situation, which we will be ill-prepared for. Our lack of preparedness will not only be because we didn’t foresee it. It will mostly be because we, as humans, wouldn’t have evolved much (on a cognitive level, in the one generation estimated) beyond our current form; nor are we even suited to evolve that fast, at par with an ASI, to adapt and realign.</p><p>The present strategy of being optimistic about the positive outcome purely on the basis of the statistical probability that such an outcome (i.e. non-contradicting alignment of goals) can exist, is very impractical. Especially as there’s no trial-and-error. Like deploying a parachute on a skydiving exercise — <em>we get to fail 0 times — </em><strong><em>tops</em></strong>.</p><h3>Overall Preparedness</h3><p>It sounds like an oxymoron to be “prepared” for superintelligence. It’s as if superintelligence were to land on Earth, and as the ambassadors of Earth, we’d have to know the exact handshake sequence it prefers to be greeted with.</p><p>No. The preparedness, in this case, is more about keeping up the pace with, being able to communicate with, and being able to reason &amp; align with a superintelligence. YES, that requires superintelligence in the first place.</p><p><strong>In order to sit at the same table with a superintelligent entity (no matter artificial, extraterrestrial or otherwise), we have to be a superintelligent species first.</strong></p><p>So, we arrive at a logical impasse, following along the traditional way of (us vs. them) thinking about AI. Proceeding along this path, we’re <strong>assured</strong> to lose control. It <strong>may or may not</strong> go in ‘our’ favor, but we definitely won’t retain the primary influence to affect the outcome.</p><h3>Let’s zoom out a little bit</h3><blockquote>In order to build a safe AGI (which can then turn into an ASI), we first have to be a superintelligent species ourselves if we are to have any hope of being in control of the situation/outcome.</blockquote><p>Stepping aside from the logical impasse (in order to break out of it), two questions come up:</p><ul><li>Firstly, how’d we become superintelligent in the first place?</li><li>Secondly, if we were a superintelligent species already, why’d we want to <strong><em>make</em></strong> artificial superintelligence?</li></ul><h4>Being (super)human</h4><p>This section may deserve an article on its own, but keeping it short for now. Superintelligence — in an abstract sense — can be simply defined as a heightened cognitive ability to perceive, process, parallelize, remap, schedule, delegate &amp; respond to stimuli/information. Nietzsche has some digressing points to add to that list, but let’s keep that aside for today’s discussion.</p><p>On the evolutionary scale, we <em>humans</em> as a species have already got pretty good at this whole routine, compared to other known lifeforms. But we have to accelerate this process a few notches in a very short time. 
That’s unattainable within the normal biological evolutionary timeframe of thousands of generations for a few percentage points of overall cognitive growth.</p><p>We need to start adopting technologies that expand the cognitive horizon: in the form of brain-machine interfaces, in the form of decentralized networks of connected brains, in the form of eradicating everything that we put in the so-called <em>human-limitations</em> bucket. We need to start the process of becoming a unified civilization with clearer overall goals. A civilization where individuals are like cells of an overall complex singular organism. A civilization that’s not divided into subclusters by geo-socio-political borders, at odds with each other. <strong>That’s the starting point.</strong></p><p>In such a world, this entire article/hypothesis can be beamed from my neocortex to yours in a few milliseconds, and a decentralized consensus can be achieved at a global scale in a matter of minutes (probably even faster).</p><blockquote>I’m mindful that several authors in various media have tried to serve dystopia on this platter (Huxley’s Brave New World, Star Trek’s Borg Collective, some episodes of Black Mirror etc., to name a few), but we shouldn’t forget two things while processing this: one, those are fictions intended to maximize sales, broadcasting at the frequency of the mass-median crowd, where panic/dystopia sells; and two, they come from the worldview &amp; subjective sense of morals of an author who has a predisposition to identify a concept as dystopian &amp; write a tragedy about it — instead of rolling up their sleeves to build solutions around it.</blockquote><h4>What’s the point of creating artificial superintelligence as a superintelligent species?</h4><p>I’m gonna go out on a limb here to say that the only plausible real use case of building an ASI is to use it as a homework or school project for the kids of a superintelligent species, as a small-scale real-world simulation for show and tell.</p><p>Any other serious effort in that direction is either:</p><ul><li><strong>Pointless</strong><br>— to superintelligent entities, like reinventing the internet is for us</li><li><strong>Futile</strong><br>— to non-superintelligent entities, beating around the bush, throwing lit matchsticks around to momentarily light up a room</li><li><strong>Dangerous</strong><br>— to non-superintelligent entities, setting the room on fire, accidentally opening Pandora’s box</li></ul><h3>Do you see the pattern?</h3><p>So if we <strong>aren’t</strong> a superintelligent species already — we shouldn’t build one.</p><p>If we <strong>are</strong> a superintelligent species already — we don’t have to build one.</p><p>So, the only remaining way to attain superintelligence is to approach it like an asymptote. Creep up to it, at a steady pace, where the product and its developer are in lock-step mutual progress, in a feedforward configuration — one is never out of sync with the other throughout the process.</p><h3>Closure</h3><p><strong>Update 1: May, 2022</strong></p><p>This section is a new addition during the second edition/overhaul of this article. Quite a while after drafting the first published edition, I found out that, on the Joe Rogan podcast, Elon Musk had stated something about hybrid intelligence being the path forward, along the very same line. 
I’m also starting to see a general consensus around this understanding among the people who have given it thorough consideration &amp; found some clarity on the matter.</p><p>So, gradually over time, I’m hoping this will start to sound more and more practical, and less of an out-there concept. That’s a good thing.</p><p>One day, this idea of hybrid superintelligence may sound so obvious that the counter/alternative approaches may start seeming out-there, or folly.</p><p>But, for the moment, I’ll cherish the novelty of the concept as still being radical, while wishing a short expiry date upon that novelty at the same time.</p><p><strong>Update 2: Jan, 2025</strong></p><p>Someone just described this article as “Nevertheless, it’s an interesting perspective with good points to have a discourse on,” which indicates <strong>gradual inclusion of this idea into the Overton window</strong>.</p><p>They also thoroughly complained about my tough-to-read writing style (justifiably so) and made sure I understood the importance of sharing ideas more accessibly. I hope to get better at it — by writing more.</p><p>I’m sure going forward it will seem like a tame, tepid idea that’s embedded into the zeitgeist. The only reason for me to write this section is to remind myself that it wasn’t always the case, and that it’s worth sharing high-frequency thoughts even when they are completely out of phase with the cultural baseline, because the culture eventually does catch up.</p><hr><p><a href="https://medium.com/debloper/enroute-to-hybrid-superintelligence-e469b4960042">Enroute to Hybrid Superintelligence</a> was originally published in <a href="https://medium.com/debloper">debloper</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>