Creative Workflow Roundup: OpenAI’s o3-Mini, DeepSeek’s AI Expansion, Copyright Clarity, and the Future of Video & Image Models
In This Week’s Roundup: OpenAI’s o3-Mini is reshaping coding for creatives, making AI-powered development more accessible than ever. DeepSeek is expanding beyond text AI with Janus-Pro, a move into image generation that signals growing competition with Western AI models. The U.S. Copyright Office has provided long-awaited clarity on AI-assisted content, confirming that AI-enhanced works remain protected under copyright law. Meanwhile, Overlap has integrated o3-Mini for AI-powered video editing, and Krea Chat is introducing a new way to interact with media generation tools.
Beyond that, Pika 2.1 is delivering high-resolution AI video with new VFX tools, Alibaba’s Qwen 2.5 Max is pushing multimodal AI even further, and Mistral Small 3 is proving that smaller models can match or outperform much larger ones. AI filmmaking tools like Storyblocker V2 and Channel42 are advancing text-to-video, text-to-3D, motion capture, AI-driven scene creation, and more!
OpenAI Releases o3-Mini: A Major Upgrade for Fast, Intelligent AI Workflows
The News: OpenAI has officially launched o3-Mini, a fast, cost-effective reasoning model now available in both ChatGPT and the API. It delivers significant improvements over its predecessor, o1-Mini, with function calling, structured outputs, and three adjustable reasoning levels. Performance is noticeably stronger. And with a 100K token output limit, it’s built for handling extended, complex tasks.
Pro users get unlimited access, while Plus and Team users benefit from triple the rate limits compared to o1-Mini. Free-tier users can also try it in ChatGPT by selecting the "Reason" button under the message composer. Additionally, o3-Mini now integrates search capabilities, allowing it to retrieve up-to-date information with direct links to sources. OpenAI has also released o3-Mini-High, a more advanced version optimized for deeper reasoning and especially strong at coding. OpenAI has emphasized that the models underwent rigorous safety testing before deployment.
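For readers who want to try the adjustable reasoning levels from code, here's a minimal sketch. It assumes the official `openai` Python SDK and that o3-Mini's effort levels are exposed via a `reasoning_effort` parameter taking `"low"`, `"medium"`, or `"high"`; the helper just assembles the request parameters, with the actual API call shown commented out since it requires an API key.

```python
# Minimal sketch of calling o3-mini with an adjustable reasoning level.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set
# before making a real request.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble chat-completion parameters for o3-mini."""
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unsupported reasoning effort: {effort}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,  # trade speed for deeper reasoning
        "messages": [{"role": "user", "content": prompt}],
    }

# In practice, pass the parameters to the SDK:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     **build_request("Refactor this editing script", effort="high")
# )
# print(response.choices[0].message.content)
```

Bumping `effort` to `"high"` roughly corresponds to the o3-Mini-High behavior described above: slower responses, stronger results on coding tasks.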
Lab Notes: OpenAI is underselling how good o3-Mini is. It’s ridiculously effective at coding, far beyond what I expected. In real-world use, it feels closer to o1-Pro than o1-Mini.
Large language models (LLMs) have already transformed creative work, helping with everything from brainstorming and story development to assisting with video editing workflows. But until now, coding has been something of a separate skillset—powerful, but requiring at least a small amount of technical expertise, even with access to previous models. o3-Mini changes that.
I’ve already started developing my own creative software using o3-Mini—something I never thought I’d be able to do. I started running my code in ChatGPT Canvas (you currently have to switch to 4o to enable Canvas, but I pasted in code from o3-Mini).
As o3-Mini solved errors and taught me more about development, I quickly graduated to Cursor, leveraging both o3-Mini and Claude 3.5 Sonnet.
No formal coding background, no deep math expertise, yet here I am, building and deploying working software within two days. I’ve already integrated the software I developed into my workflow, and it’s all running seamlessly. I’m going to make a tutorial covering my workflow and release it here for Lab subscribers ASAP.
I expect this to spark a bigger trend as more creatives start realizing they can build their own tools instead of waiting for someone else to make them. And from what I’ve seen so far, o3-Mini is more than up to the task.
o3-Mini Now Available on Overlap for AI-Powered Video Editing
The News: Overlap, a company specializing in multimodal AI agents for video production, has already integrated o3-Mini into its platform. This expansion allows o3-powered agents to assist with video search, clip generation, and content analysis. Overlap’s tools are designed to understand and automate video workflows, making it easier to locate specific moments, generate highlights, and process large video libraries efficiently.
Lab Notes: Overlap just popped onto my radar, thanks to its o3-Mini integration announcement, but this is exactly the kind of tech creative producers need to stay on top of. There’s a big industry for freelance creatives built around podcast clip editing—Overlap’s AI tools could make a lot of those roles redundant.
That said, we’ll still need human creativity for goal-setting, nuance, and validation (as I frequently mention). But this shift is going to make the space much more competitive. I haven’t used Overlap yet, but I’m definitely planning to try it. I want to see how well o3-Mini handles pulling meaningful clips from long-form content—because if it’s as good as expected, it’s going to shake up video and podcast editing workflows.
DeepSeek Expands Beyond R1 with Janus-Pro AI Image Generation
The News: After shaking up the AI industry with the release of R1, DeepSeek is pushing even further. The Chinese startup introduced Janus-Pro, a new AI image-generation model designed to rival DALL·E 3 and Stable Diffusion.
While DeepSeek’s reasoning AI made waves by proving that high-performance models can be built with significantly lower compute costs, their foray into image generation signals a broader strategy—one that aims to challenge Western dominance across multiple AI domains. Janus-Pro is the latest move in what is quickly becoming a global AI arms race, and as competition in the reasoning space heats up, the same is now happening with AI-generated visuals.
Lab Notes: DeepSeek’s R1 launch was a seismic event—one that I already covered in depth from a creative pro’s perspective here. The key takeaway? AI access is shifting fast. Open-source, high-reasoning models are no longer locked behind paywalls (o3-Mini, which outperforms R1 and is also available for free, is another example of this shift).
But when it comes to Janus-Pro, the reality is a little different. Right now, DeepSeek’s image-generation tech still lags behind industry leaders like Midjourney. That said, it’s still worth keeping an eye on—especially as these technologies continue to converge.
With DeepSeek aggressively expanding into multiple AI categories, I expect the next big battle won’t just be about which models are best—it’ll be about who can integrate them into workflows most effectively.
Copyright Office Clarifies AI’s Role in Creative Workflows
The News: The U.S. Copyright Office has reaffirmed that AI tools can assist in the creative process without undermining copyright protections. In a newly released 41-page report, the office confirmed that works incorporating AI-generated elements can still be copyrighted, as long as a human author plays a creative role in selecting and arranging those elements.
This ruling provides much-needed clarity for industries like film and TV, where AI is increasingly used in post-production workflows—from de-aging actors to removing unwanted objects from shots. The Motion Picture Association (MPA) welcomed the decision, emphasizing that AI-enhanced tools are already benefiting filmmakers and should not face unnecessary legal barriers.
However, the Copyright Office drew a hard line on fully AI-generated content: works with no meaningful human contribution after generation remain ineligible for copyright protection. This report is the second in a series of three. The first addressed AI-generated voice and likeness replication, and a third will tackle whether AI models should be allowed to train on copyrighted material without a license—a debate with major implications for AI development moving forward.
Lab Notes: This is huge news for creatives who use AI tools. While there are still plenty of legal gray areas, this decision makes one thing clear: AI-assisted workflows are here to stay.
Hollywood and the broader industry are on the verge of an AI-driven transformation. This ruling gives studios and independent producers alike the green light to embrace AI in post-production, without worrying that it will invalidate copyright claims.
At the same time, the Copyright Office’s stance on fully AI-generated content sets a clear boundary. AI is an assistive tool, not a replacement for human creativity—and that’s a distinction that will continue to shape legal frameworks as AI evolves.
The bigger legal battles—like AI training on copyrighted content—are still ahead, and those decisions could have far-reaching consequences for creative industries. But for now, if you’re using AI to enhance, refine, or accelerate your work rather than replace human creativity, you’re on solid ground.
This commentary reflects my perspective and research. It is not intended as legal advice. If you have specific questions about copyright or AI usage, consult a legal professional.
Krea Chat: A New AI Interface for Image and Video Creation
The News: Krea AI has announced Krea Chat, a new tool that brings the full power of Krea’s image and video generation features into a chat-based interface. Powered by DeepSeek, Krea Chat is designed to streamline the creative process by allowing users to interact with AI in a more dynamic and intuitive way. Instead of juggling multiple tools or refining prompts manually, users can generate, iterate, and refine creative outputs directly within a single, fast-moving chat system.
While full details on Krea Chat’s rollout are still emerging, the move highlights an ongoing trend—AI tools are becoming more integrated, conversational, and built around fluid creative workflows rather than isolated model interactions.
Lab Notes: A lot of producers already use custom GPTs, Claude projects, or even basic scripting to refine AI-generated prompts for image and video creation. But what if you could do all of that inside a single, high-speed interface? That’s exactly what Krea AI is aiming for with Krea Chat.
I haven’t tested Krea Chat yet—I’ll report back once I get hands-on. This space has been developing quickly, and it’s fascinating to see new AI tools rethinking creative production workflows.