#022 - 2026/04/08
A selection of what I've read this past week.

My main newsletter, Complex Machinery, includes a section called "In Other News..." It's where I list one-liners about interesting articles that didn't fit into any segments.
You can think of this list as a version of In Other News, but with a wider remit than Complex Machinery's "risk, AI, and related topics."
Above the fold
- Finance professionals often refer to inexperienced investors as "dumb money" – people who don't have enough information to realize they're entering into a bad trade. And if you thought retail investing was full of dumb money, wait till you look into prediction markets. (FT)
- Streaming, games, and social media have all weakened Bollywood's business model. (FT)
- To combat robots and robot-trained opponents, chess grandmasters try out-of-the-box moves. (Bloomberg)
- With so much online fraud, it's easy to forget that plenty of criminals still work in the physical realm. Cargo trucks make for particularly juicy targets. (The Guardian)
- Slowing the spread of datacenters has become a widespread, bipartisan cause across the US and Canada. (Ohio Capital Journal, Futurism, National Observer, CTV News, WSJ)
- I first read this article on using AI years ago, when "AI" meant something else. Today it highlights one key divide between ML/AI and genAI: the former had a clearer purpose, which made it easier to find use cases. (Strategy & Business)
- Plenty of publicly-hosted AI models share details about performance and intended use cases. Shouldn't models used for serious work, such as CSAM monitoring, follow that example? (Tech Policy Press)
- People running hospitals say they're ready for genAI to replace trained medical professionals, starting with radiology. Said medical professionals disagree. (Futurism)
- I missed this in January: consultancy PwC has released the results of its global CEO survey. (PwC)
The rest of the best
- Small sellers turn to AI tools to shrink product research timelines. (MIT Technology Review)
- Some data-labeling work is looking more like data-scraping work. (The Guardian)
- Fun fact: just because someone tells you their app is secure doesn't mean the app is secure. The latest example? TeleGuard apparently breaks all kinds of rules around encrypted messaging. Top of the list: storing end-users' private encryption keys on TeleGuard's servers. (404 Media)
- An unexpected twist from MLB's rollout of robot umpires: the bots' precision math has revealed some players' true heights. (WSJ)
- Companies that had focused on insuring against terrorism risks have been bitten by the surprise war in Iran. (WSJ)
- Target is about to integrate Google's Gemini AI for agentic shopping. And it'll pass the risk to customers. (Business Insider)
- Speaking of TOS fun: Microsoft says that Copilot is for "entertainment purposes only." Sure. Sure. (Tom's Hardware)
- OpenAI bought itself a podcast. I'm sure this has nothing to do with trying to sway public opinion on AI. Not one bit. (Les Echos 🇫🇷, CNBC)
- People still treat well-structured arguments and em-dashes as signs that genAI wrote something. (Le Monde 🇫🇷)
- The rule of thumb for genAI is that an experienced person should review the outputs. The catch? People may cave to bots due to "cognitive surrender." (Ars Technica)
- It's said that you can't prove a negative. So how do you prove that you're not a genAI doppelgänger? (BBC)
Did I miss anything?
Have something I should read? Send the link my way.