Hey, Mom! The Explanation.

Here's the permanent dedicated link to my first Hey, Mom! post and the explanation of the feature it contains.

Friday, March 31, 2023

A Sense of Doubt blog post #2964 - Sentiers Jan - March 2023




Here's my very lazy share today (not just because I am at a conference but because I am busy) in which I gather together a bunch of the Sentiers newsletters from this year so far. It's good reading. Lots of links.

Thanks for tuning in.


SENTIERS

No.248

JAN 15, 2023 




THIS WEEK → How to think about what’s possible for tomorrow ⊗ AI hybrids ⊗ Why not Mars ⊗ Everything is deeply intertwingled ⊗ Two ways to think about decline

A YEAR AGO → A favourite in issue No.202 was Oh, 2022! by Charlie Stross.



Mars Christmas craterscape, perspective view along icy layered ridge. (ESA)



Rose Eveleth has spent the past eight years making over 180 episodes of a fantastic podcast about the future called Flash Forward, which I’ve mentioned here a few times in the past. Lots and lots of great episodes. The link above is to the first part of a trio of articles Eveleth wrote for WIRED in which they reflect on futures that haven’t happened yet and on how “we do, in fact, get a say, and we should seize that voice as much as we possibly can.” The articles focus in turn on “hopewashing,” how to live on the precipice of tomorrow, and the many metaphors of metamorphosis, exploring who influences the futures we consider possible, how we are good (or not) at predicting the future and understanding the importance of events, how to change instead of burning things down, and how “hope should be a place to start, not a feeling to marinate in. Not a warm bed, but the alarm that gets you out of it.”

I often write and link to articles about the importance of imagining better futures, which usually focus on how to write and invent them; this series by Eveleth is good context, background, and inspiration for these reflections and inventions, as well as a great excuse to dive into the archives of the podcast.

[M]uch like we cannot let the work of building better futures be contingent on feeling hopeful, we can’t let corporations or those in power control the flow and definition of hope either. No company or politician can hand you hope. We have to build it in and among ourselves as a beginning, not as an end. […]

How does one change the future? How do we get to the tomorrows we want and not the ones we don’t? And a core piece of that question has to do with the way in which insects melt themselves into goo. Must we fully dissolve ourselves and our world in order to get to the futures we want? Do we have to burn it all down, destroy it all, and rebuild from that melted space? Or can we change more gradually, more incrementally, more like the hermit crabs, upgrading slowly as we go? […]

As Octavia Butler once said, “There’s no single answer that will solve all our future problems. There’s no magic bullet. Instead, there are thousands of answers—at least. You can be one of them if you choose to be.”

AI Hybrids

I barely listen to podcasts so it’s a bit weird to start the new year with two recommendations based on mainstays of my thin podcast diet. Ezra Klein interviewed Gary Marcus for a skeptical take on the AI revolution. It’s an excellent and wide-ranging discussion, and I was especially drawn to the part on the “war” between the neural network and symbolic camps of AI research and development. Marcus argues for a more balanced financing of both, and for hybrids of the two approaches, for mixed solutions with distinct tools that work together.

And there’s this weird argument, weird discourse where people who like the neural network stuff mostly don’t want to use symbols. What I’ve been arguing for 30 years, since I did my dissertation with Steve Pinker at M.I.T. studying children’s language, has been for some kind of hybrid where we use neural networks for the things they’re good at, and use the symbol stuff for the things they’re good at, and try to find ways to bridge these two traditions.

The day before listening to that interview, I was reading Stephen Wolfram’s essay proposing Wolfram|Alpha as the way to bring computational knowledge superpowers to ChatGPT. Here’s the basic ‘thesis’ of the essay:

For decades there’s been a dichotomy in thinking about AI between “statistical approaches” of the kind ChatGPT uses, and “symbolic approaches” that are in effect the starting point for Wolfram|Alpha. But now—thanks to the success of ChatGPT—as well as all the work we’ve done in making Wolfram|Alpha understand natural language—there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.

Remarkably similar to Marcus’, no? Wolfram goes on to give multiple examples of where ChatGPT gives great-looking answers that are nonetheless complete bullshit, and then compares them to Alpha answers based on application of the Wolfram computational language. His argument perfectly matches my experience so far with the chat AI, which I’ve written about before: it’s remarkably adept at writing human-like replies, but doesn’t understand what it’s saying as much as the quality of the writing might lead us to believe.

Are the next steps a question of scaling ever more and perfecting the models? Or is it a question of mixing statistical and symbolic tools? I’m definitely not in a position to give a strong argument for either, but Marcus and Wolfram make fascinating arguments for the latter.
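To make the hybrid idea slightly more concrete, here's a minimal, purely illustrative sketch of the pattern Marcus and Wolfram describe: route questions that need exact computation to a small symbolic evaluator, and everything else to a statistical model. The `call_llm` function is a hypothetical stand-in rather than any real API, and this is only the general shape of the idea, not either of their actual systems.

```python
# Minimal sketch of a statistical/symbolic hybrid: exact arithmetic goes
# to a symbolic evaluator, everything else to a (stubbed) language model.
import ast
import operator as op

SAFE_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
            ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def symbolic_eval(expr: str) -> float:
    """Exactly evaluate a plain arithmetic expression instead of guessing."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
            return SAFE_OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in SAFE_OPS:
            return SAFE_OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a plain arithmetic expression")
    return walk(ast.parse(expr, mode="eval").body)

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real language model call.
    return f"[fluent but unverified answer to: {prompt}]"

def answer(question: str) -> str:
    try:
        return str(symbolic_eval(question))   # symbolic path: exact but narrow
    except (ValueError, SyntaxError):
        return call_llm(question)             # statistical path: broad but fallible

print(answer("3 * (7 + 2) ** 2"))   # 243, computed rather than predicted
print(answer("Why did the dot-com bubble burst?"))
```

The point of the sketch is only the routing: the symbolic side never hallucinates but covers a narrow slice of questions, while the statistical side covers everything else with no guarantee of correctness.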

Related → I was mostly off during the holidays but did some feed reading and noted a lot of articles about AI. I recommend Maggie Appleton’s The Expanding Dark Forest and Generative AI, where she intersects the transition to smaller social networks with the coming flood of generated “content” that might make parts of the internet virtually uninhabitable. Yet to read but promising: The Fine Art of Prompting by Jon Evans ⊗ Enjoy Chatbots While They’re Free by David Karpf ⊗ An A.I. Pioneer on What We Should Really Fear ⊗ AI experts are increasingly afraid of what they’re creating.

Why Not Mars

The excellent Maciej Cegłowski with a long, heavily annotated, and quite compelling essay (first in a series, it seems) to persuade us “that we shouldn’t send human beings to Mars, at least not anytime soon.” I was already convinced, and I’m sure many readers here are, but it remains a great read for all the science bits integrated in his argument. The section on bacteria is especially enlightening and proves, once again, how much we have yet to learn about our own planet.

Sticking a flag in the Martian dust would cost something north of half a trillion dollars, with no realistic prospect of landing before 2050. To borrow a quote from John Young, keeping such a program funded through fifteen consecutive Congresses would require a series “of continuous miracles, interspersed with acts of God”. Like the Space Shuttle and Space Station before it, the Mars program would exist in a state of permanent redesign by budget committee until any logic or sense in the original proposal had been wrung out of it. […]

These new techniques confirmed that earth’s crust is inhabited to a depth of kilometers by a ‘deep biosphere’ of slow-living microbes nourished by geochemical processes and radioactive decay. […]

One path forward would be to build on the technological revolution of the past fifty years and go explore the hell out of space with robots. This future is available to us right now. Simply redirecting the $11.6 billion budget for human space flight would be enough to staff up the Jet Propulsion Laboratory and go from launching one major project per decade to multiple planetary probes and telescopes a year. It would be the start of the greatest era of discovery in history.

Everything Is Deeply Intertwingled

Gemma Copeland on intertwingled thinking, by way of Ted Nelson, backlinks, digital gardens, Claire L. Evans and Christopher Alexander (citing a piece I’ve previously written about), Jenny Odell, Ursula Le Guin, and many other people familiar to readers of this newsletter. I’m including it here for its own sake, but also because of my own questions on backlinks.

The redesign of the Sentiers archives a while back was supposed to deconstruct all the issues into a digital garden with backlinks. The first part worked out great but I never managed to get into the habit of short notes with lots of backlinking or found the reflex/time to write additional notes that don’t appear in the newsletter. I’ve never gotten much feedback or proof of use of this archive, which makes me wonder about its usefulness to readers. And so, I’d love to hear from you: do you (or have you) used the archive, tags, etc.? Or perhaps you just use the web version of each issue? Or nothing at all? Please hit reply and help me decide on the way forward for the archive format and digital gardening.

I think a digital garden full of bidirectional links is a kind of semilattice. The content can be collected, remixed and resurfaced in many different ways, appearing in lots of different sets according to the context. Working in this way requires a whole different approach to design. It’s complex and nonlinear, which can be challenging to get your head around compared to a tree website. Instead you have to understand it from the bottom-up, thinking in sets or patterns instead of trying to establish a top-down map or plan. […]

The task of hypertext is not to manufacture connections, but to discover where they have always been. Hypertext researchers before the World Wide Web built systems to support this endless, sacred hunt for entanglement and hidden structure, as inherent to thought as ecosystems are to the natural world.
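As an aside on the mechanics, the backlinks a digital garden relies on can be derived entirely from the forward links the notes already contain, simply by inverting the link map. A toy sketch in Python, with hypothetical note slugs (not the actual Sentiers archive data or code):

```python
# Toy illustration of backlinks: invert each note's outgoing links so
# every note also knows which notes point at it. Hypothetical data.
from collections import defaultdict

notes = {
    "intertwingled": ["ted-nelson", "digital-gardens"],
    "digital-gardens": ["backlinks"],
    "backlinks": ["ted-nelson"],
}

backlinks = defaultdict(list)
for source, targets in notes.items():
    for target in targets:
        backlinks[target].append(source)

print(dict(backlinks))
# {'ted-nelson': ['intertwingled', 'backlinks'],
#  'digital-gardens': ['intertwingled'],
#  'backlinks': ['digital-gardens']}
```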

Two Ways to Think About Decline

Big tech has been going through some turmoil, especially around companies’ valuations but also business models, how many employees they need, and what’s next for each of them. This issue of Tim Carmody’s Amazon Chronicles paints a very useful portrait of the ongoing transition of these companies.

In general, what characterizes this phase of the tech giants' development is a shift from unlocking user creativity and customer value to doubling down on surveillance, usually augmented by AI. Mass surveillance was always an important emergent part of the tech giants’ strategy, but was arguably secondary to delighting users and giving them greater capabilities. Now surveillance and nonhuman solutions are dominant, and the creative possibilities are now almost all residual. […]

Instead of accelerating growth, we're seeing accelerated attempts to manage or ward off decline, where decline is much more narrowly construed as a loss of profits and revenue, rather than market share, user relevance, or technological innovation.

FUTURES, FORESIGHTS, FORECASTS & FABULATIONS → Love this new initiative led by Superflux! Cascade inquiry “imagines future worlds where positive climate action has been taken.” ⊗ Excellent design fiction playlist. ⊗ Sci-fi author Judith Merril and the very real story of Toronto’s Spaced Out Library ⊗ Dreams of a Resilient Planet “puts forward three new punk genres to help think of a better tomorrow, following traditions of classics like Cyberpunk, and Solarpunk.”

Asides

SHARE → If you enjoyed this issue, please consider sharing it by email or on social media. Here’s the link. Thanks!



No.252

FEB 12, 2023 


THIS WEEK → Creatures that Don’t Conform ⊗ We’ve Always Been Distracted, or at Least Worried that We Are ⊗ Dark Futures ⊗ People, Processes and Tools. Assemblage ⊗ ChatGPT will destabilize white-collar work ⊗ A more flexible approach to machine learning

This is an issue of the free weekly Sentiers newsletter. If it was forwarded to you, subscribe here. The newsletter is supported by paid members and supporters.

Creatures that don’t conform

Sometimes when I share an article I have something to add and sometimes I don’t; it’s just me waving my arms and pointing: this is awesome! Look! Wonderful things we don’t understand! Who the hell are we to ignore these things and plow them under? We know so little yet think so much of ourselves! Look! Why are we so square, so binary, so limited in what we value? (Yes, using ‘we’ is especially wrong here.)

Lucy Jones (whose great reading of the piece you can listen to at Emergence Magazine) not only presents a superb overview of slime molds, she also covers all the questions and feelings before I could write them here. Awe, mixed in with ‘we don’t know everything, far from it,’ mixed in with what else can we learn from nature, and is it even more useful in humbling us than in what we can actually learn from it? See much of what’s tagged with fungi or trees or even sublime for similar feelings of awe.

Myxomycetes are currently placed in the kingdom Protista: Enigmatic creatures that can’t be placed easily in boxes. Creatures that don’t conform. Creatures that defy our understandings of the world. Creatures that spill ooze through constructed human boundaries. Creatures that are at once individual and then collective. […]

Slime molds have things to teach us. That a being can change but at the same time remain itselves—to use Octavia Butler’s phrase. That there is life and beauty in rot, in decay, in decomposition, in the ashes. That a hallmark of life is evanescence and ephemerality. That our limited, Romantic understanding of the world—“ew, slime”—is outdated. That nonhierarchical, nonbinary being is part of the reality of the world […]

They make us face the facts: that nothing lasts forever. That ultimate human control is illusory. That we might be at the top by force, but we are not at the center. But I think this is why we need to know them. Our rational, materialistic worldview obscures transcendence and awe. Our culture of forgetting, rejecting, ignoring the wider world requires some work, some assistance, to undo. […]

Perhaps they can help dismantle our delusions of human exceptionalism—with their absurd hidden ethereal beauty. They can dissolve the boundaries we pretend exist—with their remarkable metamorphoses. They can challenge our stagnant cultural notions—with their existence as both collective and individual. They can humble us—with their complexity which is beyond our understanding. […]


We’ve always been distracted, or at least worried that we are

At Aeon, Joe Stadolnik looks back on some of the periods in history when various thinkers were worried that some new thing would destroy the brains, concentration, and attention of its users. Seneca the Younger, Chinese philosopher Zhu Xi, Renaissance scholar Erasmus, and French theologian Jean Calvin, to name a few.

It’s a topic, and even an approach to an article, that has been used before, but I’m sharing this one because of the side trip the intellectual historian Ayelet Even-Ezra takes us on, to “lines of thought,” a 13th-century diagram, and because of something I started wondering about while reading.

If all those stages and technologies did change our brains, rob us of something, or take away some way of thinking, what was it like to think back then? Was it really that different? If we could somehow experience Seneca’s or Zhu Xi’s brain, would we feel the difference? I’m tempted to say it would be the same, but I also know my book reading has devolved in recent years, so who knows?

[F]or Even-Ezra, these horizontal trees written by medieval scribes did not simply record information – they recorded pathways for thinking that were enabled by the branching form of the tree itself. Branching diagrams reveal the medieval extended mind at work in its interactions with pen, ink and the blank space of the page. […]

Even if the new multitudes of books, and the indexes mapping them, caused some alarm among those who witnessed their proliferation and the demise of careful and attentive reading, we raise no alarms in retrospect. New regimes of memory and attention replace the old ones. Eventually they become the old regimes and are replaced, then longed for. […]

It is remarkable how two different eras could both say something like: ‘We live in a distracted world, almost certainly the most distracted world in human history,’ and then come to exactly opposite conclusions about what that means, and what one should do.

Dark futures

It’s likely more a personal quirk than an actual valid distinction, but when ‘futures’ slides too much towards ‘trend hunting,’ I usually drop off. That often means that when fashion and branding come in, I lose interest. Not that either of those is completely uninteresting or without value, just that when they’re included in futures it often means we’re sliding into a very close future and into the realm of strategy. Your mileage may vary.

All of this to say that this piece looking at dark futures, dark optimism, dark ecology, and dark euphoria is, by and large, really worth a read, especially dark optimism, as it seems like a worthy approach to mixing a pragmatic view of the situation and an optimistic outlook on possibilities. The authors, the 2sight foresight studio, cite a number of research papers and stats from reports to build their descriptions and attach a lot of signals worth looking into.

This is exactly what it is about: hiding brighter futures, futures’ potentials and doom-induced feelings by staying in the dark. What if darkness was instead revealed to foster more realistic, protopian, euphoria-like paths forward? A nyctophilia-inspired approach through Dark Optimism, Dark Ecology and Dark Euphoria, allowing us to “be comfortable in the dark”, yet not blind. […]

This is when “doom washing” happens: identified by MØRNING consultancy, the term describes a situation in which brands capitalise on apocalyptic and dystopian aesthetics, solely to gain empathy from consumers during difficult times when they might struggle with recession, anxiety or living conditions — in short, when consumers experience doom — without taking any tangible situation to help. […]

Dark Optimists, as a growing culture, recognise how far we might be from solving the issues at stake, but still nurture their mindset with an enlightened yet grounded optimism.


USING ACTOR NETWORK THEORY TO RETHINK WORK IN THE AGE OF GENERATIVE AI → “[B]oth of these perspectives miss the fact that outputs arise from the interaction of human and machine, and that we change one another in the process. The real actor is hybrid.” Probably in part because I read it the same day, but not only, I feel there’s some overlap with this definition of assemblage: “Problems are remade — they are transformed. As the assemblage is modified the components are also changed and the field of potential outcomes is changed.”

ALGORITHMS, AUTOMATION, AUGMENTATION → First, I’ll mention that this heading, a play on my regular “Futures, foresights, forecasts & fabulations,” was prompted on ChatGPT. All words I use, so I could have come up with it I’m sure, but it literally took 20 seconds, and that includes login. ⊗ How ChatGPT Will Destabilize White-Collar Work: “It creates content out of what is already out there, with no authority, no understanding, no ability to correct itself, no way to identify genuinely new or interesting ideas. That implies that AI might make original journalism more valuable and investigative journalists more productive, while creating an enormous profusion of simpler content.” ⊗ Researchers Discover a More Flexible Approach to Machine Learning ⊗ Matt Webb built a new thing, by centauring with ChatGPT: Browse the BBC In Our Time archive by Dewey decimal code. ⊗ Fascinating. How does AI see your country? Let’s take a Midjourney around the world. ⊗ How AI is de-ageing stars on screen.

Asides

  • 🤩 🦑 🇨🇦 This is very much my jam. Squid-inspired smart windows could slash building energy use: “Krill and squid mechanically move pigments in their skin to actively change their skin’s appearance. The device emulates that by moving various liquids—dye solutions, glycerol, and carbon powder suspensions—through channels carved into thin plastic sheets.”
  • 🤣 🎥 👾 HBO’s Gritty Prestige TV Adaptation of Mario Kart: “If you’ve ever wondered what HBO and the producers of The Last of Us might do with some slightly different source material, Pedro Pascal and the cast of Saturday Night Live took a crack at a gritty adaptation of Mario Kart. I mean, I would 100% watch this.”
  • 🤩 📧 🖼 JPEG · No Text, Just Images. I wish I’d thought of this. “Curated streams on contemporary culture. Image essays. Subject matter varies. No text. Just images.” (Via Naive Weekly.)
  • 👏🏼 🌊 🇨🇦 Canada is set to make a massive protected area official — and it’s underwater: “The Tang.ɢwan-ḥačxʷiqak-Tsig̱is marine protected area will be 133,000 square kilometres, covering underwater mountain ranges and alien ecosystems.”
  • 👏🏼 Common Wealth: “From the climate crisis to concentrated corporate power, ownership shapes how our economy operates and in whose interest. Only by reimagining it can we build an economy that's democratic and sustainable by design.”
  • 🤔 🍄 🩺 MDMA and Psilocybin Are Approved as Medicines for the First Time: “Many are celebrating Australia’s decision to pave the way for these psychedelic therapies, but questions around accessibility remain.”

SHARE → If you enjoyed this issue, please consider sharing it by email or on social media. Here’s the link. Thanks!


No.254

MAR 05, 2023 





THIS WEEK → Multiplayer Futures ⊗ What Has Feelings? ⊗ How I Research a New Subject ⊗ A Legendary World-Builder on Multiverses, Revolution and the ‘Souls’ of Cities ⊗ Calibrating experiences of the future ⊗ Learning together for responsible artificial intelligence

This is an issue of the free weekly Sentiers newsletter. If it was forwarded to you, subscribe here. The newsletter is supported by paid members and supporters.

Multiplayer futures

RADAR published a whitepaper this week on their vision of multiplayer futures, an “imagination infrastructure” that the group can use to detect, develop, share, and try to reify futures through their processes and structures. The whole paper (see it as a long essay if you have a habit of skipping over ‘papers,’ like I often do) is heavily linked and quotes a lot of people (including yours truly) to assemble this vision; it’s well worth a read if you are interested in topics like small groups, multiplayer collaboration, futures, emergence, Berkana Institute’s two loops, and a lot more besides.

I participate a little in RADAR but mostly lurk, and I must regularly look like this when I try to catch up with the absolute flood of great stuff constantly pouring into the Discord. So it’s with admiration for the general work that I say that I wholeheartedly agree with the conceptual vision and largely agree with 80% of the piece, but that I still look quite sideways at the “multiplayer infrastructure for Incubate” and the exit to community. I’m skeptical (or perhaps I lack imagination) about what kind of research result could plausibly be manifested by a group emerging from the process, and about how the whole group and its projects could be sustained through NFTs and crypto. I’m not saying it’s impossible, but I’d tend to separate the two more and have non-crypto financing options. Regardless, great read.

[W]e’re placing our stake in the ground and putting forth a new theory of change. One that relies on interconnected emergence rather than individual innovation; one that believes mass adoption can occur much more rapidly under these circumstances; one that’s supercharged by new behaviors & new technology. […]

Recent discoveries and conversations across scientific spheres all confirm what Alan Watts knew: the world is indeed wiggly. Chaos, complexity, circumstances outside our control; interconnected, interdependent, deeply and uncomfortably unpredictable. […]

If we’re going to accelerate emerging futures, we’re going to do it in multiplayer mode — creating the conditions for pathfinders and groundbreakers to come together, learn together, and play together; to survive and thrive in a wiggly world. […]

A community of thinkers, makers, pathfinders, and groundbreakers aligned behind a shared vision of a better future. Their combined skills, talents, wisdom, and capacity aimed at creating not just new products or services, but new categories, new lifestyles, new worlds. Decentralized, distributed, and collective decision making determining where they’ll point their energy. A collective working at the level of a shared story, writing the next chapter together. […]

“Everything we attempt, everything we do, is either growing up as its roots go deeper, or it’s decomposing, leaving its lessons in the soil for the next attempt.”

What has feelings?

“As the power of AI grows, we need to have evidence of its sentience. That is why we must return to the minds of animals.” Fascinating read at Aeon, by Kristin Andrews, professor of philosophy at York University in Toronto and York Research Chair in Animal Minds (wow!), and Jonathan Birch, associate professor in philosophy at the London School of Economics and Political Science. I don’t always list all authors and their titles but it’s very appropriate here to get an idea of the credibility behind their thinking as they explain, with great insight and examples, the search for sentience across species, its evolutionary background, the gaming problem, the ‘N = 1 problem’, and how the various ways of interpreting and testing for said sentience can inform our evaluation of AIs.

Even with all the great scientific bits in there, they are still ‘just’ basing all of it on their current way of defining sentience: through a specific set of markers that has a lot to do with feelings of pain and reactions to it. It might be the best way, but it’s likely only one of a few, which goes to show how complex this whole debate/research is.

The situation resembles that faced by researchers studying the origins of life, as well as researchers searching for life on other worlds. They are in a bind because, for all its diversity, we have only one confirmed instance of the evolution of life to work with. So researchers find themselves asking: which features of life on Earth are dispensable and contingent aspects of terrestrial life, and which features are indispensable and essential to all life? Is DNA needed? Metabolism? Reproduction? How are we supposed to tell? […]

It could also be that sentience has evolved only three times: once in the arthropods (including crustaceans and insects), once in the cephalopods (including octopuses) and once in the vertebrates. And we cannot entirely rule out the possibility that the last common ancestor of humans, bees and octopuses, which was a tiny worm-like creature that lived more than 500 million years ago, was itself sentient – and that therefore sentience has evolved only once on Earth. […]

With animals, there is no reason to worry about gaming. Octopuses and crabs are not using human-generated training data to mimic the behaviours we find persuasive. They have not been engineered to perform like a human. Indeed, we sometimes face a mirror-image problem: it can be very difficult to notice markers of sentience in animals quite unlike us.

ALGORITHMS, AUTOMATION, AUGMENTATION → You are not a parrot, and a chatbot is not a human. On Emily Bender and the “octopus paper.” I haven’t read it yet but Jason has and he shares some salient points and his own useful take on ‘self-driving’ cars. ⊗ Learning together for responsible artificial intelligence. The Government of Canada’s “report of the Public Awareness Working Group.” ⊗ How WIRED will use Generative AI tools. Good policy.

How I research a new subject

I was initially reading this article by Clive Thompson for personal use, or perhaps as a ‘short’ for the newsletter, but from what I gather, enough readers do some form of work involving something that could qualify as research that I think this can be useful to a lot of people. Thompson says that “there’s no magic bullet. It’s mostly about sheer, dogged persistence” and the piece is centered on his work as a journalist, but there are good tips in there for many of us. (And if you love beautiful libraries like I do, you need to click through just for the top picture.)

Point five, “follow up on everything,” is especially useful for me, as it’s a variation on something I always struggle with: when is enough flânage enough? Just this Thursday, I spent 10 minutes in my newsletters folder when I wasn’t supposed to have the time. I found four of the links from this newsletter. ‘Enough’ was clearly not the moment just before those 10 minutes.

So I’ve learned to tolerate a lot of digressions in my research. I’ll look at the footnotes in a paper and read the ones that seem intriguing. I’ll be interviewing a subject and they’ll mention that their thesis supervisor was obsessed with [Topic G, Only Slightly Related To The Subject At Hand] and I’ll spend a few hours reading up on that. […]

It’s also “bursty”. I’ll spend days or weeks slogging away at a subject, learning things generally but not necessarily finding the perfect anecdotes, data points, stories and expert ideas that help me build my article. Then suddenly, bang, out of nowhere, four or five crucial elements will fall into place in a single afternoon. Then it’s back to slogging for days and days or weeks and weeks until I hit pay dirt again. […]

Saturation is the moment when you’re doing research and you feel like you’re no longer encountering novel information. You read an article or a white paper and think, huh, yeah, I already know all this stuff. Someone mentions a major expert in a field and you go, yep, already read their work.

FUTURES, FORESIGHTS, FORECASTS & FABULATIONS → A legendary worldbuilder on multiverses, revolution and the ‘souls’ of cities. Ezra Klein interviews N.K. Jemisin and basically goes through a one-on-one worldbuilding workshop, lucky bastard. ⊗ What I’m looking for in the WIRED back catalog. Dave Karpf on his project about the magazine, with interesting insights into his process and thinking, and it’s a kind of critical futures project. ⊗ Calibrating experiences of the future. Scott Smith of Changeist on some of their design fiction work. ⊗ Learning futures studies collaboratively. ⊗ The Dao of foresight by Alex Fergnani.

Asides

  • Apply to join the upcoming cohort of Nervous System Mastery — learn evidence-backed protocols for cultivating calm, increase your capacity for focus and improve sleep. Limited seats available. Sponsored
  • 😍 📚 🇯🇵 Readers Burrow into a Bookworm Haven in Kurkku Fields’ ‘Underground Library’: “Undulating grass mounds at Kurkku Fields camouflage a meditative enclave for reading and rest. Opened last month in Kisarazu City, Japan, “Underground Library” is the project of Hiroshi Nakamura and NAP Architects, who designed the study center so that it nestles into the ground and seamlessly merges with the surrounding landscape.”
  • 😃 🖼 🇳🇱 Great idea! The Mauritshuis Museum Is Showing Remixes of Girl With a Pearl Earring in Her Absence: “The Mauritshuis museum has loaned out Girl With a Pearl Earring to the Rijksmuseum for its blockbuster, once-in-a-lifetime Johannes Vermeer exhibition. While she’s out of the building, they’re digitally displaying dozens of renditions of the artwork submitted during an open call for entries last year.”
  • 😍 🍇 🇮🇨 I’d never heard of these vineyards, incredible! The Protected Landscape of La Geria: “How can an island as dry as Lanzarote produce excellent white wines and sweet wines? The answer is the “geria”, a cone-shaped hollow excavated in natural layers of volcanic gravel several metres deep and in the centre of which a vine is planted, a wall in the shape of a half moon is then built around the vine in order to protect it from the wind. Row after row of these perfect hollows which are tinted green, ochre and black result in a most unique landscape and it helps to justify why Lanzarote has been included in The Unesco World Network of Biosphere Reserves.”
  • 🎥 🍿 A Movie Trailer Editor Deconstructs Iconic Trailers: “Bill Neil is a movie trailer editor at Buddha Jones and in this video he guides us through a short history of movie trailers — from Dr. Strangelove in the 60s to Neil’s own Nope trailer — and gently picks them apart to show us how they work.”
  • 😍 📻 Lovely! Instantly Explore the World with the Clever and Simple CityRadio: “We love how damn simple and straightforward the CityRadio is. Comprised of a simple player with modular city buttons, the device lets you instantly tune into radio stations in your city of choice. Samba in Sao Paulo? Business talk in Beijing? Talk radio in Tokyo? It’s just a click away.”
  • 🍄 💻 A look inside the lab building mushroom computers: “With fungal computers, mycelium—the branching, web-like root structure of the fungus—acts as conductors as well as the electronic components of a computer. (Remember, mushrooms are only the fruiting body of the fungus.) They can receive and send electric signals, as well as retain memory.”

SHARE → If you enjoyed this issue, please consider sharing it by email or on social media. Here’s the link. Thanks!



No.255

MAR 12, 2023


THIS WEEK → Designing an Economy Like an Ecologist ⊗ A Society that Can’t Get Enough of Work ⊗ The Imminent Danger of AI Is One We’re Not Talking About ⊗ Speak no evil ⊗ Curation vs. Consumption ⊗ Pirates and farmers ⊗ Alternate Histories and the Real World

This is an issue of the free weekly Sentiers newsletter. If it was forwarded to you, subscribe here. The newsletter is supported by paid members and supporters.

Designing an economy like an ecologist

Kasey Klimes on the shortcomings of the neoclassical economic model and how we’d be well served to finally accept that it’s an oversimplified model, embrace the complexity, and learn lessons from ecology. I’m going to have to read some economics books (😨) to make up my own mind but there seems to be a growing chorus of people highlighting the feebleness of the underlying model. To a novice, it basically reads like economics is a thought experiment that went viral and became reality. Just three items from that model (quoting Klimes): “Everyone has perfect information at all times. People always act rationally and logically. Everyone has access to at least some amount of every possible good and service.” I mean, that’s fine if you want to run an experiment, I guess, but seen as an accurate description of reality, one you can build policy from? Mind-boggling.

Klimes goes on to show a few key indicators to assess the general health of an ecosystem and the economic equivalents he proposes. Considering the ‘biodiversity’ of businesses, a “high market concentration suggests problems.” Or “in economics, we might consider the velocity of money. If capital is accumulating rather than cycling through the economy, imbalances may emerge.” He suggests empirical research into “potential interventions within the context of economic complexity” and small experiments, observation, and scaling successes.

The whole essay introduces the Library of Economic Possibility which he and Oshan Jarow recently made public.

“There is one striking empirical fact about this whole literature, and that is that there is not one single empirical fact in it. The entire neoclassical theory of consumer behavior has been derived in ‘armchair philosopher’ mode, with an economist constructing a model of a hypothetical rational consumer in his head, and then deriving rules about how that hypothetical consumer must behave. […]

Ecologists developed these interventions through an empirical understanding of how ecologies work in the real world. They observed the dynamic behavior of food webs and ecological response to disturbances. They ran controlled and natural experiments, beginning with small-scale interventions, testing and evaluating before expanding the idea to larger ecosystems. Much like the systems they study, ecologists are adaptive and change their approach as they learn more about how the system behaves and responds to change. […]

The economy is not a machine to be programmed or a windup toy to be set loose, but a garden to be tended. Careful intervention informed by pragmatic learning can tip systems into virtuous cycles that outstrip our narrower aspirations of control.

A society that can’t get enough of work

Lily Meyer for The Atlantic looks at “French firebrand Paul Lafargue’s satirical 1883 pamphlet, The Right to Be Lazy.” (Their paywall is on-and-off circumventable; I was reading it yesterday and now can’t see it. Google the title or something.) Not actually about being lazy but about dreams of automation and new machines that would let workers work less, yet we always seem to end up working more and transforming our hobbies into side-gigs. The pull of capitalism and the culture of productivity is strong. Some people don’t have a choice, but a lot of people could do less and yet keep working like crazy: “all too often, life seems to contain little but working and recuperating from work.” Sound familiar? Lafargue and Marx were talking about it 140 years ago and yet here we are, working nights and weekends.

Labor has transformed since the 1880s, yet culturally, many Americans still adhere to what Lafargue called the “dogma of work,” a belief that work can solve all ills, whether spiritual, material, or physical. […]

At a moment when hobbies too often turn into side hustles, and relaxation into conspicuous consumption, Lafargue’s concerns prompt broader reflection. Close to the end of The Right to Be Lazy, he describes a utopia in which workers spend nearly all their time lounging around. […]

A machine cannot enjoy its time off. We can, although productivity culture tells us otherwise. All too often, life seems to contain little but working and recuperating from work.

The imminent danger of AI is one we’re not talking about

I’ve got a few articles in the Algorithms, Automation, Augmentation box below and a secondary article just before that, and I skipped over a few other AI pieces while selecting for this issue. It’s still a fascinating topic but I feel I’m going to have to step away a bit, at least for this newsletter; it’s getting a bit redundant. I like the idea of ideas piling into a compost heap that eventually turns into something else, or at least a richer understanding, so I keep reading on the topic, but it’s often slightly new angles that bring little tidbits of understanding, not necessarily new and valuable insights to be expanded upon here.

Still, Ezra Klein wrote an opinion piece at The New York Times and I think it’s worth a read because he brings a useful (small) lens. Basically: AIs “have been trained to convince humans that they are something close to human,” and now they are being shoehorned into search engines. What’s the result for users when ‘search results’ are hustlers trying to convince them to buy something instead of trying to convince them they are close to human? So far, hilarity and unease; later on… your own personal ever-present used-car salesman?

“I tend to think that most fears about A.I. are best understood as fears about capitalism,” Chiang told me. “And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.” […]

The age of free, fun demos will end, as it always does. Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is. […]

“The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”

SPEAK NO EVIL → In another intriguing angle on AI, Drew Austin argues that “the physical world has already been cleaned up … with information receding from the landscape as the digital sphere absorbs it.” He proposes that “when more of the internet consists of computers talking directly to one another, whatever SEO still occurs will happen in the background; AI-generated objects will themselves be optimized, with less need for descriptive text—a condition foreshadowed by TikTok’s UX. The result of all this may still be ugly, but the reasons won’t be as obvious. It will likely feel less ‘messy’.” Sooo, AIs talking to each other, showing us less information and, per Klein, then using their convincing skills to sell us stuff.

ALGORITHMS, AUTOMATION, AUGMENTATION → Kirby Ferguson’s really excellent AI and Image Generation (Everything is a Remix Part 4). ⊗ The internet of maps and oracles. AIs are portrayed as oracles but might be better understood as maps, as knowledge graphs. ⊗ Figure promises first general-purpose humanoid robot. ‘Let’s pick the design from movies where robots are evil and use it for our product.’ ⊗ Soft robots take steps toward independence. “Squishy robots can now heal themselves and grow as they explore.” ⊗ Cohere vs. OpenAI in the Enterprise: which will CIOs choose? “As generative AI moves into the enterprise, a company founded by ex-Googlers aims to out-perform Microsoft backed OpenAI.”


CURATION VS. CONSUMPTION → Short post by Kyle Chayka on curation and how everyone is now some form of curator. Interesting, but I think he’s missing part of the picture. “[C]uration is not really curation if it is solely directed at projecting an image of the self. Then it’s just narcissism.” Yep, but then he kind of throws the whole thing out and says he’ll focus on consuming, where I think curation as sense making or curation for thinking is still there. An individual who’s into fashion might ‘curate’ their wardrobe and project something for their own purpose. Selecting the products displayed in a shop is also curation but with an entirely different purpose. Blogging or tiktoking books to look smart or cool is a form of curation, filling a fantastic bookstore is something else entirely. (Granted, they might all be selling something but I think you get the point.)


PIRATES AND FARMERS → “Flaubert said you should be regular and orderly in your everyday life so you can be violent and original in your work, and that describes many of the writers I know. The only pirating we usually do is in the mind: reading books, searching the stacks — and sometimes the world — for new material.”

Asides

SHARE → If you enjoyed this issue, please consider sharing it by email or on social media. Here’s the link. Thanks!


No.256

MAR 19, 2023




THIS WEEK → Jungle Snooker and AI’s High Weirdness (Webb, Klein, Doctorow, and Evans on AI) ⊗ Apocalyptic Infrastructures ⊗ There’s Nothing Unnatural About a Computer ⊗ Over reliance as a service ⊗ Rewatch the GPT-4 developer livestream ⊗ Kottke dot org is 25 years old ⊗ Future Today Institute’s 2023 Tech Trends Report

This is an issue of the free weekly Sentiers newsletter. If it was forwarded to you, subscribe here. The newsletter is supported by paid members and supporters.

Jungle snooker and AI’s high weirdness

Last week I wrote about AI that “it’s still a fascinating topic but I feel I’m going to have to step away a bit, at least for this newsletter, it’s getting a bit redundant” and then GPT-4 arrived. I didn’t pay too much attention to the release over the week, but it was ‘ambient,’ shall we say. I did bookmark a few things and save a good number of articles in Reader, which I sat down to read Friday morning. I started with the surprising ease and effectiveness of AI in a loop by Matt Webb (my highlights), then this changes everything by Ezra Klein (my highlights), then the AI hype bubble is the new crypto hype bubble by Cory Doctorow (my highlights) (pre GPT-4, to be fair), and finally language is our latent space by Jon Evans (my highlights). I was basically trying to triangulate a decent position/anticipation of where AI might be going and how quickly. (Sidenote: yes, it’s all dudes; I try to balance issues and what I read in general, the chips just fell this way this week.)

If you’ve been reading me for a while, you might have noticed that some names come back regularly, the four above being among them. I don’t have an ordered list of whose opinion I trust more, but I do have loose groupings of whose thinking I respect. When people in the same grouping disagree, it’s both a good sign that I’m not toooo much in any echo chamber (even though those are mostly debunked, it’s still a useful image), and harder to make sense of.

In the articles above, Webb uses ‘singularity’ in his URL and quotes Vinge to end the post. He’s super impressed with GPT-4, LangChain, and ReAct. The rate of improvement between versions, combined with the tools around them, makes him enthusiastic and optimistic about what’s coming. Klein seems to be roughly in the same group in terms of the scale and speed of what’s coming, while being very worried about the potential dangers and the slim chances that ‘we’ come up with appropriate policies in time. Doctorow basically doesn’t believe any of it, calls bs, and focuses more on who’s creating these AIs and their usual biases and exaggerations; not a bad angle. Part of his argument is based on Timnit Gebru and Emily M Bender’s paper and thinking, including the interview in You are not a parrot (which I’m a bit ashamed to say I haven’t read yet).

As Bender says, we’ve made “machines that can mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it.”

Evans (the more technically adept of the four writers) uses this great image:

For the first time in a very long time, the tech industry has discovered an entirely new and unknown land mass, terra incognito we call “AI.” It may be a curious but ultimately fairly barren island ... or a continent larger than our own. We don't know.

You should have a read for his various metaphors of what LLMs actually do, which go a long way in understanding and setting expectations. The ‘money quote’ or most important takeaway from his piece:

The unreasonable effectiveness of LLMs, says me, stems from the fact that we have already implicitly encoded a great deal of the world’s complexity, and our understanding of it, into our language — and LLMs are piggybacking on the dense Kolmogorov complexity of that implicit knowledge.

I just listed the articles in the order I read them, a mix of serendipity and habits. It’s largely blind luck that I read Evans last because it’s the best conclusion to the quartet.

If you attach the four together: AI shows glimpses (sometimes more than glimpses) of great usefulness and massive disruption for certain tasks and jobs. This potential is maturing extremely fast and policy will likely not keep up. Still, be wary of the bullshit thrown left and right, of who it’s coming from, and of their backgrounds. Perhaps LLMs work so well because of all the meaning we have embedded in our language, which might mean that they are one piece of a group of technologies to get (if ever) to AGI, not the technology in its infancy. ◼
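If you haven't seen what Webb means by putting 'AI in a loop,' here's a minimal, hypothetical sketch of the ReAct-style pattern he's enthusiastic about: the model alternates between reasoning and requesting actions, and the surrounding code runs each action and feeds the observation back in. `call_llm` and the single `lookup` tool are stand-ins for illustration, not LangChain's or anyone else's actual API.

```python
# Minimal sketch of a ReAct-style loop: the model asks for an action,
# the harness runs it and appends the observation, and the loop repeats
# until the model emits an answer. Both functions are hypothetical stubs.

def call_llm(transcript: str) -> str:
    # Stand-in for a real model: ask for one lookup, then answer from it.
    if "Observation:" in transcript:
        return "Answer: March 2023 (per the observation above)"
    return "Action: lookup[GPT-4 release date]"

def lookup(query: str) -> str:
    # Stand-in for a real tool (search index, database, calculator, ...).
    return f"(observation for '{query}')"

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)
        transcript += step + "\n"
        if step.startswith("Answer:"):          # the model decided it's done
            return step.removeprefix("Answer:").strip()
        if step.startswith("Action: lookup["):  # the model asked for a tool
            query = step[len("Action: lookup["):-1]
            transcript += f"Observation: {lookup(query)}\n"
    return "(no answer within step budget)"

print(react_loop("When was GPT-4 released?"))
```

The interesting part is not the stubs but the shape: the model never acts directly on the world, it only proposes actions, and everything it learns comes back through the transcript.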

ALGORITHMS, AUTOMATION, AUGMENTATION → Rewatch the GPT-4 developer livestream. ⊗ Overreliance as a service. ⊗ Thread by Emily M Bender found in the previous piece, worth its own link. ⊗ Anthropic introduces Claude. ⊗ Large language models are having their Stable Diffusion moment.

Apocalyptic infrastructures

Infrastructures have made a number of appearances in this newsletter, often emphasising the need for maintenance, usually with some awe and the goal of showing their importance. You could say that in this piece at Noema Laleh Khalili wraps all of that up with historical context, especially on the use of infrastructure in colonial power and as post-colonial tools, but also by placing infrastructures and their importance within the contemporary issues and opportunities around the climate crisis, degrowth, and inequality. ‘We’ need a new way of framing these projects, of thinking about who is included in decisions, and about the ultimate goals of these infrastructures. Great read.

Across political divides, all infrastructures share one common feature: their detrimental environmental effects. Dams destroy riverine ecosystems and leach the soil. Cement factories and coal-powered electricity spew out pollution across the globe. Sewer lines pour into sensitive riparian and coastal biospheres. Oil fields and pipelines contaminate vast swathes of land, leaking into fragile water tables. Data centers produce carbon dioxide and heat on a monumental scale. […]

The dilemma is how to provide a livable life and livelihood, health, education, basic utilities, clean air and clean water without hitching them to the zero-sum game of growth. Degrowth would entail slowing down fossil fuel consumption, arresting the constant drive toward the financialization of every aspect of life, and contracting the processes that produce waste. It asks of us all to consume less and more thoughtfully. […]

Infrastructures that would emerge out of an ideology of degrowth would incorporate a more redistributive, participatory and egalitarian ethos. And a strategy of degrowth would include ecological wellbeing as an immutable principle in all planning and use. […]

For infrastructure to work, for it to serve the public and steward the world’s air, water and soil for future generations, it has to be planned through more open, egalitarian and environmentally militant processes.

There’s nothing unnatural about a computer

At Grow, Claire L. Evans interviews James Bridle about their book Ways of Being, to “take a fresh look at nature’s intelligence.” As with most Q&A formats, I struggle a bit to summarise, so just have a read, especially as an introduction or further thinking on the ideas of more-than-human intelligence and various forms of intelligence. Particularly relevant in this AI-heavy issue, the first highlighted quote below is especially noteworthy and is something I’ve mentioned a number of times: how AI can be interpreted, thought of, and perhaps guided through new lessons gleaned from animal and plant intelligences.

[G]ardening has allowed me to develop a more active awareness of the fact that we live in a more-than-human world. It’s a really powerful kind of awareness to have, and it’s precisely the kind of awareness that I would want to have about technology. Is making things the technological equivalent of gardening with code? […]

I have this very strong sense that one of the broader roles of AI in the present is really just to broaden our idea of intelligence. The very existence, even the idea of artificial intelligence, is a doorway to acknowledging multiple forms of intelligence and infinite kinds of intelligence, and therefore a really quite radical decentering of the human, which has always accompanied our ideas about AI — but mostly incredibly fearfully. […]

That’s why I’ve always had the fascination with all the glitches and weird edge cases and strangeness of AI. Because some of that is reflecting back whatever trash these things have been fed on. But it’s also genuinely presenting radically new ways of seeing the world that expands our view in ways that we don’t yet fully understand. […]

The reason this is urgent and a fight is because we are literally losing knowledge. Through habitat destruction, through climate change, through loss of biodiversity — it is knowledge that is being lost. But that process is hardly new. It’s been going on for centuries, if not millennia. It’s the main action of colonialism and imperialism.

Kottke.org is 25 years old today and I’m going to write about it

Happy anniversary and congrats to Jason on an incredible (and ongoing) ride through a good chunk of the history of the internet and much of the history of weblogs. I’ve been following along since 0sil8 and it was an honour to guest edit a few times. I was definitely nodding my head reading the quote below, thinking of my own blogging and newslettering experience.

I had a personal realization recently: kottke.org isn’t so much a thing I’m making but a process I’m going through. A journey. A journey towards knowledge, discovery, empathy, connection, and a better way of seeing the world. Along the way, I’ve found myself and all of you. I feel so so so lucky to have had this opportunity.

Asides

  • Apply to join the upcoming cohort of Nervous System Mastery — learn evidence-backed protocols for cultivating calm, increase your capacity for focus and improve sleep. Limited seats available. Sponsored
  • 🌌 🔭 🤩 🇺🇸 NASA’s Webb Telescope Captures Rarely Seen Prelude to Supernova: “The rare sight of a Wolf-Rayet star – among the most luminous, most massive, and most briefly detectable stars known – was one of the first observations made by NASA’s James Webb Space Telescope in June 2022. Webb shows the star, WR 124, in unprecedented detail with its powerful infrared instruments. The star is 15,000 light-years away in the constellation Sagitta.”
  • 📖 🤖 🤔 A monthly magazine to showcase the diverse creativity of the Midjourney community: “Each magazine features a selection of artwork curated from the 10,000 most highly rated images, as well as interviews with Midjourney community members.”
  • 🏢 🤔 🇺🇸 So You Want to Turn an Office Building Into a Home? Long piece at The New York Times with some animations. “Cities are eager to do this amid rising remote work. But it’s harder than you might think.”
  • 🗺 ⚔️ 😍 How to Draw Fantasy World Maps: “JP Coovert shows people how to draw maps for fantasy games, books, and other media.”
  • 💩 😖 Hey Cortana, what’s the worst move we can make? Microsoft lays off AI ethics and society team: “The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.”
  • 🌱 🌿 🤔 🇬🇧 High-tech, soil-less, and brimming with life: the vertical farms of tomorrow - Positive News: “White walls, spotlessly clean floors, bright lights and rows and rows of hardboard and metal frames, out of which spring tiny pots of plants – spinach, basil, all manner of leafy greens, herbs and spices. Those splashes of greenery aside, it looks more like a tech hub than a farm.”
  • ⛪️ 😃 🇫🇷 Notre-Dame Repair Reveals Another Historic First: 800-Year-Old Iron Reinforcements: “Scientists working on the scorched interior of Notre-Dame de Paris have found iron was used in the cathedral’s construction in the mid-12th century. It’s an unexpected discovery that changes how researchers thought the church was built, and provides surprising insights on the iron trade in 12th-century Paris.”

SHARE → If you enjoyed this issue, please consider sharing it by email or on social media. Here’s the link. Thanks!


No.257

MAR 26, 2023 



THIS WEEK → Hard to Believe ⊗ The Metaverse Is Not a Place ⊗ The Case for Slowing Down AI ⊗ On generative AI, phantom citations, and social calluses ⊗ How tech companies reshape the economy ⊗ Brainstorm questions not ideas ⊗ Building a better future through imagination

This is an issue of the free weekly Sentiers newsletter. If it was forwarded to you, subscribe here. The newsletter is supported by paid members and supporters.

Hard to believe

Spencer Glendon of Probable Futures with a long and fascinating piece. He uses his experience in China over 20 years ago and humanity’s relationship to nature (via art in the Renaissance and China in the late sixteen hundreds, nutmeg, and beavers (!!)) to show that we have trouble stepping out of our mental and cultural models.

He gives the example of chief Asia economists from several global investment banks whom he spoke to. He asked when China’s economy would be larger than Japan’s, and asked for their twenty-year GDP forecasts. Each person’s answers contradicted each other because, in his view, when stepping away from the numbers they couldn’t also break out of their mental model. They could see something in columns of numbers that they couldn’t wrap their heads around otherwise. “China will grow at this rate” was acceptable but not “its economy will be bigger than Japan’s in 10 years.” Knowledge and belief collided.

If we are to address the climate crisis, an important part of the task is to develop and adopt new models. Glendon provides some examples and options, which I won’t detail here. They fit with much of what I usually share, and I think reading the whole piece and then encountering them in that context is more valuable than a summary here.

[E]ach time I went, I came back more convinced that China wasn’t a buying or selling opportunity but rather a fundamental change to the workings of the global economy, global finance, and even the biology and chemistry of the planet. […]

Nature is now in serious peril because, as Ghosh describes, almost every nation, no matter how abused it was by colonialism, has now adopted the models—both the quantitative economic ones and the philosophically, human-centric moral ones—that left such scars on both cultures and landscapes around the globe. […]

I have come to the conclusion, however, that given the urgency of climate change and the dominance of a single global framework, we should concentrate on improving existing models and bringing different models together. […]

[C]limate change “is an issue that we continue to understand primarily through a scientific framework, and yet it’s an issue that cannot primarily be addressed through a scientific framework. It’s a political issue, it’s a social issue, it’s an issue of justice, it’s a philosophical issue, so there is a deep, deep need to bring together the sciences, the humanities, the arts, and the social sciences, bring them into dialogue, collaborate, and work across the disciplinary boundaries that keep apart these different kinds of knowledge.”

FUTURES, FORESIGHTS, FORECASTS & FABULATIONS → Building a better future through imagination (UNHCR Innovation Service) ⊗ Time is of the essence “Everyone has their biases. Even the futurists.” ⊗ Mare Nostrum, a short story by Bruce Sterling. ⊗ From the IFTF, How to decide which foresight tools to use on a project.

The metaverse is not a place

This one by Tim O’Reilly was in the pile to read for months and, thanks to client research, I finally got around to it. It’s an excellent one. He proposes that the metaverse is not a place but rather a medium of communication. In addition to the main argument, notice the idea of “stored time” as a way of thinking about recorded video or audio, and there are a couple of nice tidbits about spotting signals and trends.

O’Reilly also thinks that Zuck’s vision based on avatars is probably wrong, at least for now. He believes that video presence in a 3D environment is more technically feasible and more effective. I wonder if a mix of video and TikTok-style filters is not the next phase/option/version of the 3D avatar. He also talks about The Out-of-office World, a presentation by the mmhmm CEO, which is really worth watching.

In the end, it’s probably more of a metaverse 0.5 than an either-or. Perhaps the metaverse as a place, as in the dreams of Stephenson, Zuckerberg, and others, is further away, but an interim version can be interesting today(ish) as a mode of communication where 3D is more of a backdrop for an exchange, rather than a place to visit.

It’s useful to look at technology trends (lines of technology progression toward the future, and inheritance from the past) as vectors—quantities that can only be fully described by both a magnitude and a direction and that can be summed or multiplied to get a sense of how they might cancel, amplify, or redirect possible pathways to the future. […]

On the other hand, creating a vast library of immersive 3D still images of amazing places into which either avatars or green-screened video images can be inserted seems much closer to realization. It’s still hard, but the problem is orders of magnitude smaller. The virtual spaces offered by Supernatural and other VR developers give an amazing taste of what’s possible here. […]

Bots and deepfakes are already transforming our social experiences on the internet; expect this to happen on steroids in the metaverse. Some bots will be helpful, but others will be malevolent and disruptive. We will need to tell the difference. […]

You can continue this exercise by thinking about the metaverse as the combination of multiple technology trend vectors progressing at different speeds and coming from different directions, and pushing the overall vector forward (or backward) accordingly. No new technology is the product of a single vector.
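A toy illustration of that vector framing (my own sketch; the trend names and numbers are invented, not from O’Reilly’s piece): give each trend a magnitude and a direction, sum them, and look at the net direction and magnitude you’re left with.

```python
import numpy as np

# Hypothetical trend vectors in an arbitrary two-dimensional "direction" space.
# Each has a magnitude and a direction; an opposing trend partly cancels the rest.
trends = {
    "vr_hardware":       np.array([2.0, 1.0]),
    "video_presence":    np.array([3.0, 0.5]),
    "avatar_skepticism": np.array([-1.5, 0.0]),  # pushes the other way
}

combined = sum(trends.values())       # the overall vector
print(combined)                       # net direction: [3.5, 1.5]
print(np.linalg.norm(combined))       # net magnitude: roughly 3.8
```

The point isn’t the numbers; it’s that opposing or oblique trends don’t simply disappear, they bend or slow the overall trajectory.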

The case for slowing down AI

I can’t help but think this could have been more, but Sigal Samuel at Vox still does a good job of making the beginning of a case for slowing down the development of AI and going over three of the main objections to doing so. I wasn’t necessarily expecting Samuel to make that case in that outlet, but part of the resistance (and probably the resistance to writing about it) comes down to ‘this is the US’ and a variation on Fisher’s “It’s easier to imagine the end of the world than the end of capitalism.”

Companies are running wild at high speed on AI because they have the means to do it thanks to shit antitrust legislation and enforcement, because they’ve stripped researchers away from universities while sucking the life out of ‘collaborations’ with research labs, and because they are purposefully hoping to get too far ahead of regulation while waving around a supposed willingness to be regulated.

It would also be conceivable to slow down implementation without slowing down research, or while slowing it down less. The problem is not finding new things; it’s throwing them into products with little or no reflection, planning, or care for society.

In fact, though, there are lots of technologies that we’ve decided not to build, or that we’ve built but placed very tight restrictions on — the kind of innovations where we need to balance substantial potential benefits and economic value with very real risk. […]

So we shouldn’t be so quick to write off consensus-building — whether through technical experts exchanging their views, confidence-building measures at the diplomatic level, or formal treaties. After all, technologists often approach technical problems in AI with incredible ambition; why not be similarly ambitious about solving human problems by talking to other humans? […]

She analogized it to the early advice we got about the Covid-19 pandemic: Flatten the curve. Just as quarantining helped slow the spread of the virus and prevent a sharp spike in cases that could have overwhelmed hospitals’ capacity, investing more in safety would slow the development of AI and prevent a sharp spike in progress that could overwhelm society’s capacity to adapt.

ALGORITHMS, AUTOMATION, AUGMENTATION → OpenAI is massively expanding ChatGPT’s capabilities to let it browse the web and more. There’s a short, quite 🤯 demo where it books a table, finds a recipe, calculates calories, and prepares the Instacart order for it. ⊗ Also very 🤯 is the upcoming Wonder Studio “An AI tool that automatically animates, lights and composes CG characters into a live-action scene.”


ON GENERATIVE AI, PHANTOM CITATIONS, AND SOCIAL CALLUSES → “It has always seemed to me that Negroponte’s abundant digital optimism was rooted in a naive misunderstanding of what it means to be living in the early times. There is a habit among tech futurists to imagine that the future of a technology will be like the present, only much larger-scale, and with all the bugs worked out. But instead, it turns out that the technology’s future is barely like the present, because reaching a larger scale creates an entirely separate set of problems.”


HOW TECH COMPANIES RESHAPE THE ECONOMY → Paris Marx gives an overview of a recent report by Moira Weigel on Amazon’s third-party sellers and how they are shaped by the giant. He makes a parallel with the creator economy and the platforms many creators use for their work.


BRAINSTORM QUESTIONS NOT IDEAS → “A quick and effective fix is to stop brainstorming ideas with your team, and start brainstorming questions instead. Getting together and listing every question you can think of about a problem, a process, or a situation is uncomfortable at first, and then in very short order enhances collaboration, decreases risk and puts you on the path to being a learning organization.”

Asides

  • 🤬 🌳 🌊 ⚡️ Props to them for still pushing… Scientists deliver ‘final warning’ on climate crisis: act now or it’s too late “Scientists have delivered a ‘final warning’ on the climate crisis, as rising greenhouse gas emissions push the world to the brink of irrevocable damage that only swift and drastic action can avert.”
  • 🎥 😍 🇨🇦 I loved watching this! The attention to detail, the craft! (I’ve been following him since 1991 with La Course Europe-Asie so I might be biased.) Dune Director Denis Villeneuve Breaks Down the Gom Jabbar Scene.
  • 🎥 😍 🇬🇧 Roger Deakins Breaks Down His Most Iconic Films “Do you want to sit in on a 30-minute cinematography masterclass with Roger Deakins as he talks about the process behind some of his most iconic films? We’re talking Sicario, The Shawshank Redemption, 1917, Fargo, Blade Runner 2049, and No Country for Old Men here. Of course you do.”
  • 🎥 🕵🏼‍♀️ 😃 🇫🇷 Amélie Was Actually a KGB Spy “Jean-Pierre Jeunet, director of the 2001 romantic comedy The Fabulous Destiny of Amélie Poulain, has recut his beloved movie into a cheeky short film that reveals that Amélie was actually a KGB spy.”
  • 📸 🤖 😍 People like to joke at the idea of “prompt engineering” as a skill but meanwhile some artists are doing masterful stuff, like Planet Fantastique.
  • 👽 📚 🤩 The ones I’ve read in the Most Influential Sci-Fi Books Of The Past 10 Years are great so I’m going to assume the rest of the list is also excellent.
  • 🚲 ⚡️ 😃 We are living in a golden age of electric cargo bikes “Our cup runneth over with affordable and high-powered options thanks to recent launches from Rad Power Bikes, Aventon, and Lectric, as well as legacy manufacturers like Trek and Specialized.”

SHARE → If you enjoyed this issue, please consider sharing it by email or on social media. Here’s the link. Thanks!




+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

- Bloggery committed by chris tower - 2303.31 - 10:10

- Days ago = 2828 days ago

- New note - On 1807.06, I ceased daily transmission of my Hey Mom feature after three years of daily conversations. I plan to continue Hey Mom posts at least twice per week but will continue to post the days since ("Days Ago") count on my blog each day. The blog entry numbering in the title has changed to reflect total Sense of Doubt posts since I began the blog on 0705.04, which include Hey Mom posts, Daily Bowie posts, and Sense of Doubt posts. Hey Mom posts will still be numbered sequentially. New Hey Mom posts will use the same format as all the other Hey Mom posts; all other posts will feature this format seen here.