Even as agricultural land is becoming a coveted investment (as manifest in purchases by billionaires like Stan Kroenke, Bill Gates, and Jeff Bezos; by institutions like Nuveen and the Canada Pension Plan Investment Board; and by publicly-traded REITs like Farmland Partners and Gladstone Land Corp), there’s another class of investor – with a very different use case – on the hunt. Joy Shin and Ryan Duffy report…
Last year, a datacenter developer started working the phones along Green Hill Road in Silver Spring Township, PA, outside Harrisburg. Mervin Raudabaugh got the call: a mystery buyer wanted to buy his 261 acres of farmland. The developer offered him $60,000 an acre for the land the 86-year-old had farmed for six decades. Mervin turned it down, selling to Lancaster Farmland Trust for <$2M instead, thereby locking the soil into agricultural use. “I was not interested in destroying my farms,” he told a local Fox affiliate.
Two things about this story might have been unthinkable a generation ago: that anyone would offer a farmer nearly $16M for that land, and that it’d be worth more dead (paved over) than alive (producing food).
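The gap between those two valuations is easy to verify with back-of-the-envelope arithmetic (a quick sketch; the ~$4,170/acre figure is the 2024 average U.S. farmland price cited elsewhere in the piece):

```python
# Sanity-checking the numbers in Mervin Raudabaugh's story.
acres = 261
developer_offer_per_acre = 60_000

offer_total = acres * developer_offer_per_acre
print(f"Developer's offer: ${offer_total:,}")  # → Developer's offer: $15,660,000

# At the 2024 average U.S. farmland price of ~$4,170/acre,
# the same land is worth a small fraction of that as farmland:
farmland_value = acres * 4_170
print(f"As farmland:       ${farmland_value:,}")  # → As farmland:       $1,088,370
```

That roughly 14x spread is why “nearly $16M” and a sub-$2M trust sale can describe the same 261 acres.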
The Supermarket of the World
For the better part of a century, that’s what America was. From 1959 through 2018, the country ran an agricultural trade surplus every single year, peaking near $27B in 1981, when soybeans, corn, wheat, and rice flowed out of the heartland in volumes that functioned as soft power and hard trade leverage. (When the Soviet harvest failed in 1963, Khrushchev had to buy American wheat through private US grain companies: at market rate, without credit, shipped on American vessels – a humiliation his enemies leveraged to oust him the following year.)
Then, in 2019, the curves crossed. The U.S. has since run a deficit in four of the last six fiscal years. Last year, we imported $43.7B more in agricultural products than we sold.
Washington has started saying the right words. Last month, the USDA and Department of War signed a memorandum designating agriculture as a national security priority. Multiple bills linking food security to national security percolated through the last Congress. If you talk to the right folks in Washington, you’ll hear agriculture now being discussed the way semiconductors were in 2021 — as a sovereign capacity that a serious country cannot offshore.
All of which sounds right, none of which changes what is happening on the ground. Because the ground is the problem.
In real estate, you think in square feet, in proximity, in comps. Farmland trades in acreage, water tables, growing seasons, and soil composition. And right now, farming an acre profitably is just about the hardest it’s ever been.
Since 2020, seed costs have climbed 18%, fertilizer 37%, fuel 32%, and interest on operating loans 73%. Labor is up 50%. These costs never came back down after the 2021-22 supply chain shock, but crop prices did, creating a double squeeze on farmers. Farmland has appreciated nearly four-fold from ~$1,090/acre in 2000 to $4,170 in 2024.
Some 40% of U.S. farmers are over 65. The American Farmland Trust estimates nearly 300M acres will change hands through inheritance in the next two decades. When it does, the math facing each heir will look a lot like Mervin’s. What would you do: keep farming a business with collapsing margins, or, if one were offered, take the check?
A Collision of Old & New Economies
Datacenters, chip fabs, and other megaprojects need what farms need: flat land, abundant water, reliable power, and access to transport.
In Loudoun County, VA, ground zero of America’s datacenter buildout, farmland already lists at $55,000–$79,000/acre, a significant premium over the statewide average because markets are pricing in the possibility the land will convert from farmland to computerland.
Conversions are large and getting larger. Meta’s $10B compute cluster in Richland Parish, Louisiana, sits on 2,250 acres of former soybean fields. Samsung’s new $17B fab occupies 1,200 acres outside Taylor, Texas, a town that once called itself the largest inland cotton market in the world. Micron’s $100B megafab is going up on 1,400 acres of former agricultural land and wetlands in Clay, New York. These are some of the largest private investments in American history, and among the most economically and strategically consequential bets we’re making as a country. You can’t help but notice the symbolism of it all: each is being built on rural land that was growing something one or two generations ago.
Datacenter developers, who already need some PR help, have seen local opposition to these projects emerge as a real planning risk, with farming families showing up at county meetings to argue that once the land converts, it will never come back.
Nobody should pretend this is irrational. A fab generates more economic value per acre than any soybean field ever will, the jobs pay better, and the strategic logic of onshoring chips is sound. But the math that makes each individual conversion obvious is the same math that, in the aggregate, leaves you structurally short on food. The country is losing about 2,000 acres a day, with 18M more projected to convert by 2040.
The Flow of Capital
As Washington works to subsidize farming, to the tune of $10–$15B in federal support each year, Wall Street is betting on the land underneath it leaving farming.
Nuveen Natural Capital, a subsidiary of TIAA, manages $13.1B in farmland across 3M acres globally and recently launched a REIT targeting $3B in new capital. Those holdings have appreciated far beyond what crop income would justify, because they follow the pattern of a conversion-optionality play: buy well-located agricultural land at agricultural tax rates and wait for rezoning.
Nearly 95% of American farms are still family-run, but most are modest operations. The 6% of farms generating $1M+ in sales produce 78% of everything, up from 69% just five years ago. Farming has developed the power-law distribution of a winner-take-most industry, except the winners don’t get to set their own prices. The family farm persists in name, but the economics (and economies of scale) increasingly push it to operate like a corporation or exit.
And institutional investors have some strange bedfellows on their side of the orderbook. Foreign investors held an interest in nearly 46M acres as of 2023 – 3.6% of all privately held farmland – up 85% since 2010. Canada alone holds 15M acres. China, which cannot feed its population from its own soil, built COFCO International into a state-backed grain trader that does $38.5B a year and accumulated millions of acres globally. Saudi Arabia was pumping Arizona’s groundwater through Fondomonte, a state-linked operation growing alfalfa for export, until Arizona killed the leases in 2023. Those countries treat productive soil as something worth a sovereign premium, and something you want to physically control…
[The authors recount the history of “Agro-Doomerism” and consider the (largely technological) potential solutions to the conundrum: “This is a hard problem, but it is a solvable one, as shown by the long history of technological revolutions in agriculture. Today, a set of technologies that were each too expensive or immature a decade ago have converged to the point where the raw inputs for a farm, ex land, can get radically cheaper, all at once.” They enumerate some of those potential saviors, and conclude…]
… The long arc of agro-doomerism and technological revolution suggests there’s reason for optimism. Many times before, the “math” said we’d run out of food; many times before, new science, systems, and processes came along that changed the denominator and proved the doomers wrong. Hoping and praying for AGI or another Norman Borlaug [the father of the Green Revolution] to save our bacon is not a strategy, but abundance-oriented technology stacks that don’t force a zero-sum choice between preservation and productivity might be. We should look at systems that help unfallow and uplift acres, making farmland competitive enough that we don’t pave over too much and one day realize we want the topsoil back – or our ag trade deficit erased.
The bet worth making is 1) to never bet against America, of course, and 2) that something similar will happen here: that productivity, not preservation alone, will close the gap. This is a generational opportunity, a category deeply in the national interest, and a sector wanting more capital, technology, engineers, and founders to show up. Those who get there first will be serving a gigantic market, and attacking a problem that Washington has acknowledged is existential but has no idea how to productively solve.
The supermarket of the world was built on cheap land and cheap water. Neither is cheap anymore, and both are being bid up by us – via population growth – as well as by the industrial renaissance that we care so deeply about. But that doesn’t mean we can forget the foundational inputs – literally – to our way of life…
Farming vs. fabs (and data centers)… American agriculture is caught in a collision between old and new economies: “The Supermarket of the World.”
Is mathematical beauty real? Or is it just a subjective, human ‘wow’ that is becoming redundant in an AI age? Rita Ahmadi explores…
It is a hot July day in London and I take the bus to Bloomsbury. I often come here for the British Library, the British Museum or the London Review Bookshop. More than a location, Bloomsbury feels like stepping into a work of art – maybe one of Virginia Woolf’s stories, or Duncan Grant’s paintings.
This time, I am here for mathematics: the Hardy Lecture at the London Mathematical Society (LMS), named after G H Hardy, a professor of mathematics at the University of Cambridge, a member of the Bloomsbury Group, and a president of the LMS. You may know him from the film The Man Who Knew Infinity (2015), in which he’s played by Jeremy Irons.
The 2025 lecture is by Emily Riehl of Johns Hopkins University in Baltimore, who is talking about a complex mathematical ‘language’ known as infinity category theory: could we teach it to computers so that they could understand it? If successful, computer programs could verify proofs and construct complex structures in this area.
A few seats to my left, I recognise Kevin Buzzard, wearing the multi-coloured, patterned trousers he’s known for among mathematicians. Based at Imperial College London, Buzzard is working on a computer proof assistant called Lean. His interest is personal: after long disputes with a colleague over a flawed proof, he lost trust, as he often puts it, in ‘human mathematicians’. His mission now is to convince all mathematicians to write their proofs in Lean. In the Q&A after one of his talks, he said of the debate between truth and beauty in mathematics: ‘I reject beauty, I want rigour’ – though his vibrant sense of fashion suggests otherwise.
Interest in AI-driven approaches to mathematics has grown exponentially, and many mathematicians have left traditional academic research to explore their potential. Recently, one group of distinguished mathematicians designed 10 active, research-level questions for AI to tackle. At the time of writing, various AI companies and researchers had claimed to have found solutions, which were under evaluation by the community.
Sitting in the room in Bloomsbury, I stare at the Hardy plaque and wonder: would Hardy find proofs generated by AI beautiful? I’m not sure. He believed there should be a strong aesthetic judgment in mathematics, drawing parallels with poetry, and argued that beauty is the first test of good mathematics. He went as far as to say that there is no permanent place in the world for ugly mathematics.
If asked, many mathematicians today still talk about the aesthetic appeal of one approach over another.
Yet we live in a different century to Hardy and his Bloomsbury peers, with different technologies and techniques, so perhaps we need a clearer definition of what mathematical beauty actually is. Over the history of mathematics, we can find examples where both rigour and the pursuit of beauty have shaped mathematics itself. So, if we’re completely replacing this with a computer-assisted quest for truth and rigour, we ought to know what we’d be abandoning, if anything. Is mathematical beauty like the beauty in literature and art – or is it something else?…
[Ahmadi explores the idea of “beauty,” generally and in mathematics; traces the rise of AI as a tool, and concludes…]
… my own definition of beauty in mathematics would be as follows:
“A simple mathematical structure that surprises even the most experienced mathematicians and transfers a sense of vitality.”
But is an AI-assisted proof simple or surprising? How do we define vitality in a machine? On these questions, the jury is out. Myself, I am torn. Maybe models just need more training to match our creativity. But I also wonder whether our limbic system is required. Can we write proofs without emotional kicks? I am also unsure if perfectly efficient brains can come up with novel revolutionary ideas.
Ultimately, this debate is about more than aesthetics; it is closely tied to the development of AI-assisted mathematics. If AI models can produce novel mathematical structures, how should we direct them? Is it a search for beautiful or truthful structures? That question may well guide the years to come.
Some mathematicians say they prefer the ‘truth’ and only the ‘truth’. However, my recent discussions with mathematicians showed me that most immediately recognise, enjoy, and even wholeheartedly smile at a beautiful piece of maths. In fact, they spend their whole lives in search of one…
It’s nearly impossible not to be watched these days. It can start right at home with your neighbors and their Ring cameras – a product of a company that sold fear to the American public and is now integrating AI to turn entire neighborhoods into networked, automated surveillance systems.
Head out a bit further and you’ll likely be confronted by Flock’s network of cameras that not only track license plates, but also track people’s movements with detailed precision. And as the Trump administration raids cities across the U.S. for undocumented immigrants, tech giants like Palantir are powering tools for ICE, including one called ELITE that helps the agency pick which neighborhoods to raid.
Understandably, people are worried about violations of their privacy by companies and the government. And many wonder: is there any way to go back once we’ve released all this AI-powered surveillance tech?…
* “Bentham’s Panopticon [at top] is the architectural figure of this composition. We know the principle on which it was based: at the periphery, an annular building; at the centre, a tower; this tower is pierced with wide windows that open onto the inner side of the ring; the peripheric building is divided into cells, each of which extends the whole width of the building; they have two windows, one on the inside, corresponding to the windows of the tower; the other, on the outside, allows the light to cross the cell from one end to the other. All that is needed, then, is to place a supervisor in a central tower and to shut up in each cell a madman, a patient, a condemned man, a worker or a schoolboy. By the effect of backlighting, one can observe from the tower, standing out precisely against the light, the small captive shadows in the cells of the periphery… He is seen, but he does not see; he is the object of information, never a subject in communication.” – Michel Foucault, Discipline and Punish: The Birth of the Prison
###
As we feel seen, we might recall that it was on this date in 2000 that the dot-com bust effectively began. Between 1995 and its peak five days earlier, on March 10, 2000, the Nasdaq Composite stock market index rose from 1,006 to 5,048—a roughly 400% gain fueled by the conviction that the internet would render every prior valuation framework obsolete. It did not.
On March 13, 2000, news that Japan had once again entered a recession triggered a global sell-off that disproportionately affected technology stocks. Soon after, Yahoo! and eBay ended merger talks and the Nasdaq fell 2.6%; still, the S&P 500 rose 2.4% as investors shifted from strong-performing technology stocks to poor-performing established stocks. The market held steady on the 14th. Then, on this date 26 years ago, the broader market began to drop… and kept dropping. By the end of the stock market downturn of 2002 (the “second chapter” in the correction that began in 2000), stocks had lost $5 trillion in market capitalization since the peak. At its trough on October 9, 2002, the Nasdaq Composite had dropped to 1,114, down 78% from its peak. It took 15 years for the Nasdaq to regain its March 2000 peak.
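The percentages above follow directly from the index levels (a quick check; the levels are rounded as cited in the text, so the run-up comes out at 402% against the ~400% usually quoted):

```python
# Nasdaq Composite levels cited above (rounded).
level_1995 = 1_006
peak_2000 = 5_048    # March 10, 2000
trough_2002 = 1_114  # October 9, 2002

run_up = (peak_2000 - level_1995) / level_1995
drawdown = (peak_2000 - trough_2002) / peak_2000

print(f"1995 to 2000 gain: {run_up:.0%}")    # → 1995 to 2000 gain: 402%
print(f"2000 to 2002 drop: {drawdown:.0%}")  # → 2000 to 2002 drop: 78%
```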
Mark Twain (the author of the observation above) was more correct than he may have understood. Alex Wakeman explains that, while most other plants have a single “most useful” element, wild cabbage has many. This makes it perfect for breeding…
Every crop we consume came from a wild ancestor. Through breeding, people selected for bigger grains, juicier fruit, more branches, or shorter stems – gradually turning wild plants into improved yet recognizable versions of their originals. The rare exception is Brassica oleracea, wild cabbage: the origin of cabbage, bok choy, collard greens, broccoli, Brussels sprouts, cauliflower, and much else.
Wild cabbage is unassuming: some untidy leaves and a few thick, coarse stems on the browner side of purple that poke out from the soil. Nothing about it looks appetizing.
Wild cabbage (Brassica oleracea) growing in Northumberland.
Nevertheless, many cultures have recognized something special in this plant. By selecting plants with denser layers of leaves, ancient people created modern cabbage and kale. Others bred for the inflorescence, a dense bundle of small flowers that forms the head of cauliflower and broccoli. By favoring large, edible buds, thirteenth-century farmers living around modern-day Belgium created Brussels sprouts. Under different selection pressures, Brassica oleracea has become German kohlrabi, or Chinese gai lan, or East African collard greens.
This level of morphological diversity is unusual. Modern tomatoes, for example, vary in size, shape, and color, but are all recognizably tomatoes. Since the 1920s, scientists have worked to understand how Brassica oleracea was domesticated and to deepen our knowledge of evolution and artificial selection.
By combining modern genetics, genomics, and molecular biology with linguistic, historical, and sociological sources, researchers are now beginning to develop conclusive answers…
It feels clear that we’re in the midst of a meaningful cultural/social transition… but what kind of transition? When did it begin? And what might it portend?
Increasingly, folks are turning to the works and thoughts of mid-twentieth-century thinkers like Eric Havelock, Walter Ong (who drew heavily on Havelock’s work), Marshall McLuhan, Joshua Meyrowitz, and others to suggest that we are in the midst of a shift from a literate culture (back) to an oral culture.
The world is full of theories of everything. The smartphone theory of everything argues that our personal devices are responsible for the rise of political polarization, anxiety, depression, and conspiracy theories—not to mention the decline of attention spans, intelligence, happiness, and general comity. The housing theory of everything pins inequality, climate change, obesity, and declining fertility on the West’s inability to build enough homes. If you treat TOEs as literal theories of everything, you will be disappointed to find that they all have holes. I prefer to think of them as exercises in thinking through the ways that single phenomena can have vast and unpredictable second-order effects.
Today’s article and interview are about my new favorite theory of everything. Let’s call it “the orality theory of everything.” This theory emerges from the work of mid-20th-century media theorists, especially Walter Ong and Marshall McLuhan. They argued that the invention of the alphabet and the rise of literacy were perhaps the most important events in human history. These developments shifted communications from an age of orality—in which all information was spoken and all learning was social—to an age of literacy, when writing could fix words in place, allowing people to write alone, read alone, and build abstract thoughts that would have been impossible to memorize. The age of orality was an age of social storytelling and flexible cultural memory. The age of literacy made possible a set of abstract systems of thought—calculus, physics, advanced biology, quantum mechanics—that are the basis of all modern technology. But that’s not all, Ong and his ilk said. Literacy literally restructured our consciousness, and the demise of literate culture—the decline of reading and the rise of social media—is again transforming what it feels like to be a thinking, living person.
The most enthusiastic modern proponent of the orality theory of everything that I know is Bloomberg’s Joe Weisenthal, the co-host of the Odd Lots podcast… we discussed orality, literacy, and the implications for politics, storytelling, expertise, social relations, and much more…
Some highlights:
Derek Thompson: The return of orality: Why do you think it explains everything?
Joe Weisenthal: I don’t think it explains everything. I think it only explains 99% of everything.
I believe that human communication is becoming more oral. And by that I don’t just mean that people are talking more with their mouths, although I do think that is the case. It’s more that communication in general, whether in the spoken form or in the digital form, has the characteristics of conversation. And it truly harkens back to a time before really the written word or, certainly, mass literacy…
… Thompson: Thinking used to be something that had to be done socially. It was impossible to learn the Odyssey on your own. It was transmitted to you from a person. You would rehearse it with someone else. So the mode of information transfer was necessarily social. Books are written alone and books are typically read alone. And so this age of literacy gave rise to this privilege of solitude and interiority that I think is really, really important.
Walter Ong, our mutual hero, has a great quote that I want to throw to you and then get your reaction to, because it goes right to this point. He said:
Human beings in primary oral cultures do not study. They learn by apprenticeship, hunting with experienced hunters, for example, by discipleship, which is a kind of apprenticeship by listening, by repeating what they hear, by mastering proverbs and ways of combining and recombining them, but not study in the strict sense.
I’m very interested in a phenomenon that I call the antisocial century, the idea that for a variety of reasons, we are spending much more time alone. And that is having a bunch of second and third order effects. And it really is interesting to me as I was going deeper into this project, to think that it’s the age of literacy that in many ways allowed us to be alone as we learned and prize a certain kind of interiority.
Weisenthal: Marshall McLuhan had this observation: the alphabet is the most detribalizing technology that’s ever existed. It speaks to this idea that prior to the written word, all knowledge was per se communal. It had to be in a group. If you have multiple texts in front of you, then you trust the one that feels most logical. But you don’t have that luxury when all knowledge is communal. Being part of the crowd has to be part of learning.
The ear and the eye are very different organs. You can close your eyes, which you can’t do with your ears. You can get perspective from your eye and establish perspective in a way you can’t do with your ears. So it’s like you go into a room and you can stand back at the corner so you can make sure that you can see everything going on in the room. The ear is very different. We’re at the center of everything constantly. You can’t close it. The ear continues to work while we’re sleeping. There’s an evolutionary purpose for the fact that we can still hear when we’re sleeping, because if there’s an intruder or a wild animal or something, it wakes us up and we can run.
So the ear, McLuhan said, is inherently a source of terror. It feels very digital. Even though we do look at the internet, there is this sense in which we can never remove ourselves from it. Even if we’re reading the internet, it almost feels more like we’re hearing it. There’s an immersiveness in contemporary digital discourse that I think is much more like hearing than it is about seeing. So I think there’s all kinds of different ways that we are sort of returning to this realm….
… Thompson: I want to apply your theories to some domains of modern life, starting with politics. You mentioned Donald Trump, and I went to look up Donald Trump’s nicknames, because I know that you’re very interested in his propensity for epithets, for nicknames. It’s nearly Homeric. And so fortunately for our purposes, Wikipedia keeps track of all of Donald Trump’s nicknames, so I didn’t have to remember them—speaking of outsourced memory. Here’s some of them. Steve Bannon was Sloppy Steve, Joe Biden was Sleepy Joe, Mike Bloomberg was Mini Mike, Jeb Bush, of course, Low Energy Jeb, Crooked Hillary, Lyin’ James Comey, and DeSantis was Ron DeSanctimonious. I think that one might’ve gotten away from him.
Weisenthal: That was late Trump, he didn’t have his fastball anymore.
Thompson: This plays into this classic tradition of orality. Right? The wine-dark seas, swift-footed Achilles. And Walter Ong has a great passage where he writes about this, that I would love to get your reaction to:
“The clichés in political denunciations in many low-technology, developing cultures – enemy of the people, capitalist warmongers – that strike high literates as mindless are residual formulary essentials of oral thought processes.”
Basically, it’s so interesting to think that Ong is saying that it is low-technology developing countries where these nicknames are prevalent. But you wake up today and the richest country in the world is presided over by a now two-time president whose facility for nicknames is very famous. I wonder: what significance do you put on this? Why is it important that a figure like Trump plays into these old-fashioned oral traditions?
Weisenthal: It’s interesting: when you say things like, “Oh, Trump has a sort of Homeric quality to the way he speaks,” that repels a lot of people. Like, “What are you talking about? This is nothing like Homer.” But my theory, which I can’t prove, is that the original bards who composed Homer were probably Trump-like characters. Rather than seeing Trump as a Homeric character, what I’m almost certain is the case is that the people who gathered around and told these ancient stories were probably the Trump-like characters of their time: colorful, very big characters, people who were loud, who could really get attention, who would captivate people when they talked. One of Ong’s observations in Orality and Literacy is about heavy and light characters in oral societies. Heavy characters are like Cerberus, the three-headed dog, or Medusa, or Zeus: larger-than-life, frequently grotesque, visually grotesque characters.
I think if you look at the modern world, the modern world has elevated a lot of what I think Ong would call heavy characters. I certainly think Trump is a heavy character, with his makeup, and his hair, and his whole visual presentation. I think Elon is a heavy character. I think if you look at the visual way that a lot of sort of YouTube stars look with their ridiculous open-mouthed soy faces when on their YouTube screenshot. I think they sort of present themselves, not in a way that we would think of as conventionally good-looking. Right? Not in a way that’s conventionally attractive, but this sort of grotesque visual that just sticks in your head. And that that is clearly what works. We are in the time of the heavy character…
… Meyrowitz in 1985 was talking about electronic media before anyone really conceived of that idea. One of his observations is that everybody has a front stage and a backstage. We talk on this podcast in a certain way. But that is different than how we would talk at home with our family. Or you and I might talk differently when we hang up this podcast and we’re saying goodbye or something. This is a very normal thing, which is that you just talk differently in different environments and so forth.
What Meyrowitz anticipated in No Sense of Place is this idea that electronic media would cause us to come to be suspect of people who talk differently in one environment vs. another. If someone code-switched, if someone talked differently on the campaign trail than they did in their private life, that we would come to think, ”Oh, this person’s a phony.” He predicted that by allowing everyone to see all the facets of these characters, we would view them differently.
Thinking about a politician: something about Trump is that there are very few examples of him ever talking differently in one environment than in any other. People could be totally repelled by things that he said in public or private. But he’s not a hypocrite in the way that a lot of people use that word. He is the same in almost every environment. This is precisely what Meyrowitz would’ve anticipated: that once we could see both the front stage and the backstage, and the concept of place was completely disintegrated from the idea of character, we would come to view that consistency of character as a value.
Thompson: The name of Meyrowitz’s book is No Sense of Place. And I want to just slow down on that title, because it’s a pun. It’s not a very intuitive pun, but it’s a really, really smart pun. By No Sense of Place, Meyrowitz is saying that electronic media extends our consciousness outward, so we don’t really know where we are. I could be reading Twitter in Arlington, Virginia, but feel myself becoming emotional about Gaza or Ukraine, or Minneapolis, in a way that was impossible in the age before television or radio. This new age of communications media takes us out of where we are and puts us right in front of the faces of people that are thousands of miles away.
But he also means no sense of place in a hierarchical sense. He means that people will be able, with electronic media, to operate outside of their slot in the hierarchy: the poor will be able to scream at the billionaires; the disenfranchised will be able to scream at those who disenfranchise them. And this, he said, is going to create more social unrest. It’s going to create more of what I think he would now agree is something like populism. And this really interesting idea that electronic media not only unmoors us from where we are geographically, but that it also demolishes hierarchies, I think it was incredibly insightful, considering it was written 41 years ago.
But he goes one step further in a way that’s really surprising… He says this about our future relationship to expertise. And God only knows how many people have talked about what’s happened to expertise in the last few decades. Meyrowitz:
“Our increasingly complex technological and social world has made us rely more and more heavily on expert information, but the general exposure of experts as fallible human beings has lessened our faith in them as people. The change in our image of leaders and experts leaves us with” – and this is exactly your point – “a distrust of power, but also with a seemingly powerless dependence on those in whom we have little trust…”
As we contemplate culture, we might note that it was on this date in 2012 that Encyclopædia Britannica’s president, Jorge Cauz, announced that it would not produce any new print editions and that 2010’s printing of the 15th edition would be the last. The first (printed) edition of the Encyclopædia Britannica was published between 1768 and 1771 in Edinburgh as the “Encyclopædia Britannica; or, A Dictionary of Arts and Sciences, compiled upon a New Plan.” Since 2012, the company has focused only on an online edition and other educational tools. It now goes simply by “Britannica.”