#020 - 2026/03/25
A selection of what I've read this past week.

My main newsletter, Complex Machinery, includes a section called "In Other News..." It's where I list one-liners about interesting articles that didn't fit into any segments.
You can think of this list as a version of In Other News, but with a wider remit than Complex Machinery's "risk, AI, and related topics."
Above the fold
- Richard Bookstaber is a long-time risk manager and author. He has, as the kids say, "seen things." And now he's comparing the dawn of the 2008 financial crisis to what's brewing in the genAI space. I don't often use the phrase "required reading," but this is required reading. (New York Times)
- Prediction markets like Polymarket and Kalshi have turned bettors into the trader trope of Stressed-Out Person Staring At Several Screens All Damned Day, "monitoring the situation." (Bloomberg)
- The twisted connections between social media, our lives, AI psychosis, and the genAI-industrial complex. (Disjunctions)
- People have used generative AI to create deepfake images, revenge porn, and clones of themselves. Most recently, someone has created a synthetic right-wing American soldier. And it's doing numbers on social media. (Washington Post)
- Sports gamblers, when they lose bets, have been known to threaten athletes. Now that prediction markets let you bet on everything, it was only a matter of time before the bad behavior followed. Like, say, a journalist facing threats because of how they classified a missile strike. (Times of Israel, Der Spiegel 🇩🇪)
- Open-source intelligence (OSINT) has been used to identify missile strikes, military officials, and rogue business owners. At least one criminal gang has used it to track and kill a rival. (OCCRP)
- This one's a few months old but still worth a read. A team from OpenAI published a paper noting that LLMs are pretty much guaranteed to emit nonsense now and then. While this is hardly news – industry experts, myself included, have pointed this out before because it's quite literally how LLMs work – it's kind of the OpenAI crew to have put it in a formal research paper that people can cite. (ComputerWorld)
- Deloitte offers two frameworks for how to approach Black Swans and similar shocks. (Deloitte)
- Here's the latest example of Person With A Sensitive Job Leaks Their Location Through The Strava App: someone in the French military giving up the position of their aircraft carrier. This one's in French, but Le Monde offers an English translation at the top of the article. (Le Monde 🇫🇷)
The rest of the best
- Tired of staring at your Polymarket bets on your phone? Why not head to a Polymarket-themed bar so you can watch along with everyone else? Remember: if you are betting in a group setting, it's "being social" and not "problem gambling." (Japan Times)
- Today in Possible Conflicts of Interest, Or Perhaps Just Laughably Self-Serving: Jensen Huang, CEO of genAI hardware company Nvidia, says that software developers should be spending the equivalent of half of their salary on LLM tokens. (Business Insider)
- Double-check your Zoom settings. A company is turning video meetings into AI-generated podcast fodder and hosts are only finding out well after the fact. (404 Media)
- In response to – shall we say – "vigorous customer feedback," Microsoft is pulling back on some AI-based features they'd planned for Windows. (TechCrunch, Ars Technica)
- Police in Essex have put a facial recognition project on hold. If you think you know the reason why(te), you would be correct. (The Register)
- Remember how the metaverse was going to change everything? Oh wait, it didn't. And now Faceb– sorry, "Meta" is all but withdrawing from the field (in order to chase genAI, I suppose). (New York Times, Der Spiegel)
- Armed with genAI tools, people are mounting denial-of-service (DoS) attacks on the court system. (Futurism)
- Yet another case of "company accidentally leaks chatbot interactions." This time, it's Sears. (Wired)
- Funny what happens when you ask actual AI experts – not members of the hype crew – about suitable tasks for AI. In short: low-stakes and verifiable. (The Guardian)
- The UK backs down (somewhat, for now) on letting AI companies use artists' copyrighted work. (The Guardian, Les Echos)
- The return of BTS is all about the shows … and the merch. (WSJ)
- The Verge interviewed the CEO of Superhuman, the company that briefly gave us Grammarly's Unpaid Impersonation of A Contemporary Writer feature. The discussion is every bit as surreal as you'd expect. (The Verge)
- Popular genAI image tools default to Caucasian faces and need extra coaxing to show people of other cultures. (Le Monde 🇫🇷)
Did I miss anything?
Have something I should read? Send the link my way.