#013 - 2026/02/04
A selection of what I've read this past week.

My main newsletter, Complex Machinery, includes a section called "In Other News..." It's where I list one-liners about interesting articles that didn't fit into any segments.
You can think of this list as a version of In Other News, but with a wider remit than Complex Machinery's "risk, AI, and related topics."
A special thanks to people who sent me interesting links last week!
Above the fold
- While music artists are leery of genAI, the studios are largely in favor. I wonder why. (Financial Times)
- You know that running (half-)joke that every genAI use case begins with the assumption that you're an idiot? Plans for agentic AI shopping reinforce that idea. (Financial Times)
- If you think iOS's UI/UX has gotten worse, this designer agrees with you. And they bring receipts. (The Conversation)
- On K-pop's massive growth and hazy definition. (NPR)
- OpenAI decides to retire the "4o" model that was allegedly linked to emotional derailment and suicides. (No word on whether the company will, y'know, shore up safety and risk management efforts on its remaining models.) (Futurism)
- In the latest episode of Companies Can't Seem To Protect Their Data (Or Perhaps They Don't Care To Do So), a digital photo booth has exposed end-users' pictures and other data. (404 Media)
- Same idea, but a genAI-toy company left kids' chat logs and other information wide open. (Ars Technica)
- Self-driving cars susceptible to injection attacks by … road signs. (The Register)
A Twitter trio
- A group of journalists tested Grok and … it's still cranking out Very Bad Things. (Reuters)
- Related: French authorities paid a little visit to Twitter's offices. That whole "widespread creation of nonconsensual sexual imagery" thing didn't go over so well in l'Hexagone. (Le Monde 🇫🇷, BBC)
- It gets worse: apparently, the move to generating riskier content was part of a revenue-growth strategy. (Washington Post)
The rest of the best
- Meta turns to TV spots to change public opinion on its datacenter projects. (New York Times)
- Twitter is not the only company engaged in creating nonconsensual deepfakes of women. Civitai is (allegedly) also on that train. (MIT Technology Review)
- Someone ran a study on how genAI chatbots can lead people to harm. (Ars Technica)
- AI company Anthropic looked into the impact of AI code assistance on skill development. (Anthropic)
- An unexpected knock-on effect of genAI: software companies are seeing a credit crunch. (Bloomberg)
- Video game studio Ubisoft announces a restructuring effort that amounts to a real shake-up. (The Guardian, WSJ)
- CEO of AI company Anthropic raises the alarm on … potential AI harms. (The Guardian)
- The rise of genAI-based search starts to eat away at web publishers' traffic. (Le Monde 🇫🇷, Les Echos 🇫🇷)
- You may have heard of Moltbook, the so-called social network for genAI bots. A security flaw left all of the site's bots at risk of external control. (404 Media)
- How lawn-care company Husqvarna has transformed itself again. (Branding Strategy Insider)
- The payments processing field has a long and storied history with adult content. So it's weird that payment processors didn't take action when Grok turned into a large-scale CSAM engine a couple of weeks ago. (The Verge)
Did I miss anything?
Have something I should read? Send the link my way.