ElevenLabs v3 with Emotion Tags, Luma Modify Video, Veo 3 Everywhere, Claude Code CLI, Cursor 1.0, and More
Creative Workflow Roundup: No fluff, no sponsors, no affiliate links, just this week's key AI + creative tech news and my unfiltered lab notes.
In This Week’s Roundup: ElevenLabs v3 speaks seventy languages with fresh emotion tags, Luma’s Modify Video lets you reshoot in post, and Veo 3 spreads from Replicate to Krea and Freepik with full audio. PlayAI delivers word-level speech edits, CapCut quietly expands backgrounds, and Cursor 1.0 arrives with a built-in junior engineer. Claude Code CLI joins the affordable Pro tier, while Wondercraft, MagicPath, Bland TTS, and OpenAI MCP connectors keep producers iterating faster than ever.
ElevenLabs v3 Alpha Adds Expressiveness
• Need to Know: Eleven v3 alpha covers seventy languages, multi‑speaker dialogue, and emotion tags.
• Lab Notes: The quality of this new voice model is great. ElevenLabs continues to lead the audio-gen space.
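The emotion tags are just bracketed cues embedded inline in the prompt text, e.g. [whispers] or [excited]. A minimal sketch of assembling tagged dialogue (sending it to the API via the ElevenLabs SDK is left out; the helper and sample lines here are mine, not from ElevenLabs):

```python
def tag_line(text: str, *tags: str) -> str:
    """Prefix a line of dialogue with bracketed emotion/delivery tags."""
    prefix = "".join(f"[{t}] " for t in tags)
    return f"{prefix}{text}"

# Two-speaker snippet with per-line delivery cues.
script = "\n".join([
    tag_line("We shipped the cut a day early.", "excited"),
    tag_line("Don't tell the client how easy it was.", "whispers"),
])
print(script)
```

Tags can also be stacked per line, e.g. `tag_line("Fine.", "sighs", "sarcastic")`.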
Luma Introduces Modify Video
• Need to Know: Luma’s Modify Video restyles characters, look, or setting after footage is shot.
• Lab Notes: Reshoot/restyle in post suddenly becomes easier.
Google’s Veo 3 Video Model Reaches Krea and Freepik
• Need to Know: Krea and Freepik added Veo 3, including sound output.
• Lab Notes: One model, omni‑platform distribution strategy.
Veo 3 Goes Live on Replicate
• Need to Know: Veo 3 launched on Replicate with API access for text‑to‑video.
• Lab Notes: Competition heats up against Runway, Luma Labs, Kling, etc.
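API access means a text-to-video request is a few lines with Replicate's Python client. A hedged sketch: the model slug "google/veo-3" and the input keys are assumptions based on Replicate's usual conventions, so check the model page for the exact schema; the generation call itself is commented out since it needs a REPLICATE_API_TOKEN and incurs billing.

```python
def veo3_input(prompt: str, duration: int = 8) -> dict:
    """Build an input payload for a text-to-video request (assumed schema)."""
    return {"prompt": prompt, "duration": duration}

payload = veo3_input(
    "A drone shot over a neon-lit harbor at dusk, ambient audio", duration=8
)

# To actually generate (network + billing; pip install replicate):
#   import replicate
#   output = replicate.run("google/veo-3", input=payload)
#   print(output)
```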
Karpathy Lauds Veo 3
• Need to Know: Andrej Karpathy praised Veo 3 video/audio output.
“Very impressed with Veo 3 and all the things people are finding on r/aivideo etc. Makes a big difference qualitatively when you add audio.”
• Lab Notes: Veo 3 is a very powerful model, marking a new era for AI video technology.
PlayAI Offers Fine Grained Speech Editing
• Need to Know: PlayAI unveiled a speech editor that lets producers swap words, pacing, and emotion inside real audio.
• Lab Notes: Precise speech editing is impressive.
CapCut Adds AI Background Expand
• Need to Know: A hidden CapCut feature now expands video backgrounds for re‑framing shots.
• Lab Notes: A reminder that you can do a lot with CapCut!
Cursor 1.0 Gains Memory and Autoreview
• Need to Know: Cursor is finally a 1.0 product!
“This release brings BugBot for code review, a first look at memories, one-click MCP setup, Jupyter support, and general availability of Background Agent.”
• Lab Notes: Everyone gets their own junior engineer, now running as an AI background process.
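The one-click MCP setup reads server definitions from a project-level `.cursor/mcp.json`. A minimal hand-written equivalent looks like this (the server name and package are placeholders I made up, not a real server):

```json
{
  "mcpServers": {
    "docs-search": {
      "command": "npx",
      "args": ["-y", "@example/docs-mcp-server"]
    }
  }
}
```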
Claude Code CLI Now in Pro Plan
• Need to Know: Anthropic added Claude Code CLI (Command Line Interface) to its twenty‑dollar Pro tier, enabling bigger context and better code analysis/generation.
• Lab Notes: Tighter limits, but it’s affordable!
Wondercraft Convo Mode Automates Podcasts
• Need to Know: Wondercraft’s Convo Mode spins up multi‑voice podcast episodes without recording.
• Lab Notes: Zero‑mic talk shows become a template away.
MagicPath Generates iOS 19 Inspired UI
• Need to Know: MagicPath AI instantly drafted an iOS 19 style dock on request.
• Lab Notes: Rapid UI prototypes shrink client approval cycles.
OpenAI MCP Connectors Unlock Proprietary Data
• Need to Know: OpenAI rolled out Model Context Protocol connectors so ChatGPT can query internal databases with existing permissions.
• Lab Notes: Enterprise chat suddenly speaks fluent company lore.
Bland Releases Hyper Realistic TTS
• Need to Know: YC spotlighted Bland’s new text‑to‑speech, which edges closer to broadcast quality.
• Lab Notes: Voice‑over jobs face fresh pressure.
Runway CEO Presses Hollywood on AI Video
• Need to Know: The Verge profiled Runway CEO Cris Valenzuela urging studios to adopt generative video workflows.
• Lab Notes: Cultural buy‑in is the next bottleneck, not pixels.
Not Boring Camera App Unveiled
• Need to Know: Andy Allen revealed the Not Boring Camera after one hundred prototypes and three years of development.
• Lab Notes: Expect cinematic‑look filters without DaVinci Resolve.
Captions Launches Mirage Studio Omni Model
• Need to Know: Captions’ new Mirage Studio generates actor‑driven videos via a proprietary omni‑modal foundation model.
• Lab Notes: I have not tried this, but it looks interesting.