Creative Workflow Lab

GPT-4.5 Testing, Luma’s Ray2 Flash, FLORA AI Workflows, and a Wave of New AI Video & Voice Models

Creative Workflow Roundup: No fluff, no sponsors, no affiliate links, just this week's key AI + creative tech news and my unfiltered lab notes.

Shannon Leonard
Mar 09, 2025
∙ Paid

In This Week’s Roundup: GPT-4.5 is now widely available, and my testing confirms its strengths in creative workflows. Luma Labs keeps pushing video generation forward with Ray2 Flash. FLORA is offering a new way to structure AI-powered workflows. And beyond that, we’re seeing a huge wave of new video models, voice tools, and AI-powered editing systems. Plus: Pika, Runway, ElevenLabs, Freepik, and other generative media platforms all rolled out major updates, so if it feels like AI tools are evolving faster than ever, that’s because they are!

GPT-4.5 for Creative Workflows: Rollout and My Initial Testing

The News: OpenAI completed the rollout that started last week and made GPT-4.5 available to Plus subscribers. Reactions have continued to be mixed, though I’ve noticed more people recognizing its creative capabilities. After launching at the expensive Pro tier last week, it’s now much more accessible at $20/month. In my testing, it’s great at pattern recognition and creativity, delivering better writing and more nuanced responses (more on that in my lab notes below). It hasn’t replaced other models across the board: Claude 3.7 Sonnet is still the best coding model, and reasoning models like o1 or o3-mini still perform better on certain logic-heavy tasks.

At the same time, there’s growing speculation about GPT-5. All we know is that OpenAI’s CEO Sam Altman has previously indicated that GPT-5 will be a hybrid system combining the capabilities of multiple types of LLMs, possibly launching in the next few months. For creative professionals, the real question is whether GPT-4.5 meaningfully improves workflows today.

Lab Notes: I’ve been using generative AI models like GPT-3 in my workflows since 2021 and even experimented with a fully AI-written comedy series in December 2022, the month after ChatGPT was first released. So after testing GPT-4.5 over the past few days, I’ve been able to compare it directly to previous models.

What stood out is how well it performs in creative tasks. I’ve gotten some profoundly unique outputs, much better than GPT-4o for short-form writing, ad copy, and ideation. That said, I’ve found all generative models are still fairly unpredictable and different models handle different tasks in different ways. I will still use 4o (and other models like Claude 3.7) alongside 4.5, especially since there is currently a limit for Plus users set at approximately 50 messages per week. As I wrote about last week, GPT-4.5 is available to Pro users with higher limits and via API.

GPT-4.5 is noticeably better at ideation and creative writing when prompted thoughtfully. The key is working with it and regenerating responses, refining prompts, and iterating to get the best results. It’s not artificial general intelligence (AGI), but it’s a big step forward in creative AI.

The pace of change in AI is faster than ever, and keeping up is part of the challenge. Right now, GPT-4.5 is worth experimenting with (more than just a few test prompts) for creative work. API pricing is still high, but now Plus users get reasonably generous access, even with some limitations, making GPT-4.5 one of the most advanced creative models widely available.

Luma AI’s Ray2 Video Model: Keyframes, Loops, and Faster Generation

The News: Luma Labs AI has released an update to its Ray2 video model available in Dream Machine, adding new tools that give producers more control over AI-generated content. The new Keyframes feature lets users define the first and last frame of a sequence, enabling smooth transitions, structured story beats, and spatial exploration. The update also introduces Extend, which allows videos to be lengthened to continue a sequence, and Loop, which generates seamless looping videos.
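For API users, this first/last-frame control is expressed as keyframes in the generation request. Here’s a hedged sketch of what such a request body can look like — the field names (`model`, `keyframes`, `frame0`/`frame1`) reflect my reading of Luma’s Dream Machine API docs at the time of writing, so treat them as assumptions and verify against current documentation; the image URLs are placeholders:

```python
# Hypothetical sketch of a Dream Machine generation request using keyframes.
# Field names are assumptions from Luma's public API docs -- verify before use.

def build_keyframe_request(prompt, first_image_url, last_image_url, loop=False):
    """Build a request body pinning the first and last frame of a clip."""
    return {
        "model": "ray-2",  # assumed model identifier
        "prompt": prompt,
        "loop": loop,      # request a seamless looping video
        "keyframes": {
            "frame0": {"type": "image", "url": first_image_url},  # first frame
            "frame1": {"type": "image", "url": last_image_url},   # last frame
        },
    }

body = build_keyframe_request(
    "slow dolly-in on a rain-soaked neon street",
    "https://example.com/start.jpg",
    "https://example.com/end.jpg",
)
```

Pinning both endpoints is what makes the smooth transitions and structured story beats possible: the model only has to solve for the motion in between.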

Luma has also launched Ray2 Flash, a new model that is three times faster and three times cheaper than previous versions. It maintains the high-quality text-to-video, image-to-video, audio, and control features of Ray2 while reducing cost and improving generation speed.

Lab Notes: Solid updates! Luma’s Ray2 video models and Dream Machine creative interface continue to keep pace with other leading AI video tools. Ray2 remains one of the best models at interpreting natural language. Unlike some other systems, in my side-by-side tests Luma’s models don’t require complex prompt engineering to get good results, which makes a huge difference when streamlining workflows. That’s a major advantage.

Luma keeps proving itself as a leading AI video tool, and with Ray2 Flash making generation cheaper and faster, it’s only getting better.

FLORA: A New AI Workflow Tool for Creative Professionals

The News: FLORA is a new AI-powered creative system designed to give professionals more speed, control, and collaboration in their workflows. It moves beyond one-off AI outputs by letting users build structured workflows, connecting different AI tools via nodes (similar to ComfyUI) so they can refine, iterate, and scale their creative process.

The system also supports real-time collaboration. And users can build and share generative workflows inside the FLORA community, offering a way to learn from and build on existing processes.

Lab Notes: I recommend checking out an article by Reggie James, who has some interesting thoughts about this product. I’ll link it here.

There’s no shortage of ways to access AI models today. Personally, I’ve been liking the flexibility of running models on-demand via Replicate and Fal. But FLORA looks like a strong product, giving users an innovative way to structure and customize their own AI-powered workflows. It’s on my list to experiment with as soon as possible.
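The node idea itself is easy to picture: each node wraps a model call and pipes its output into the next one. A toy Python sketch of that wiring pattern — the `Node` class and the stand-in “model” functions here are my own illustration, not FLORA’s actual API:

```python
# Toy illustration of a node-based creative workflow: each node wraps a
# callable "model" and feeds its result to the next node in the chain.
# None of this is FLORA's real API -- it only shows the wiring pattern.

class Node:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn
        self.next = None

    def then(self, node):
        """Connect this node's output to another node's input."""
        self.next = node
        return node

    def run(self, payload):
        result = self.fn(payload)
        return self.next.run(result) if self.next else result

# Stand-in "models": a prompt refiner, an image step, and an upscaler.
refine = Node("refine", lambda p: p + ", cinematic lighting")
image = Node("image", lambda p: {"image_for": p})
upscale = Node("upscale", lambda img: {**img, "scale": 4})

refine.then(image).then(upscale)
output = refine.run("neon street at night")
```

The appeal of tools like FLORA (and ComfyUI before it) is that this chaining, swapping, and re-running happens visually instead of in glue code.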

Roundup Within a Roundup: A Wave of New AI Model Updates This Week

The News: AI video tools are evolving at a relentless pace, and this week saw a flood of new releases. In addition to the Ray2 Flash update I already talked about, several companies have launched powerful model updates that push the boundaries of AI-generated video and imagery. Here’s a breakdown of the latest:

Tencent has released HunyuanVideo I2V, an open-source and open-access image-to-video model. The code is available on GitHub for anyone interested in experimenting with it.

Minimax (Hailuo) introduced Image-01, a text-to-image model that delivers cinematic-quality images at roughly 1/10 the price of comparable models. It’s now available via their API platform, with web and mobile applications on the way.

A new eye correction model has been launched, which could be particularly useful for video editing apps and AI avatar enhancements.

Pika has rolled out Pikadditions and Pikaswaps in 1080p, significantly improving resolution and precision for more realistic AI-generated video effects. The update is available now for Pika paid users and will soon be added to the app.

Runway released an update to its video-to-video restyle tool, allowing users to transform video clips using image references for more refined stylistic changes.

Also, LTX has a new video model.

Lab Notes: So many great models here. The advancements in AI video generation just keep coming, and the competition is pushing quality higher across the board.

Fidelity has been a challenge for AI-generated video, so seeing models improve in that area is important. Runway’s video-to-video restyle also stands out; I’ve seen some impressive results using image references to guide transformations, and it opens up a lot of possibilities for refining AI-generated content.

Also of note, China-based companies are making rapid advancements, launching highly capable yet affordable models. Between open-source initiatives like Wan2.1 and HunyuanVideo and cost-efficient image-generation alternatives like Image-01, access to high-quality AI tools is expanding fast. There’s a lot to keep up with, but the worldwide competition with US-based companies like Pika Labs and Luma Labs (both in Silicon Valley) and Runway (NYC) signals an accelerating race toward more powerful, flexible, and cost-effective AI media generation.


ElevenLabs Adds Speech Speed Control, Expanding Expressive Voice AI

The News: ElevenLabs has introduced voice speed controls across its platforms. This feature allows users to adjust pacing at the word level, giving more precise control over speech delivery and expressiveness.

Adjusting pacing is a major factor in how spoken content is perceived. The ability to slow down or speed up speech naturally, without distorting quality, opens up new possibilities for narration, character dialogue, and AI-generated voice performances.

Lab Notes: Just like with video and image models, there’s a growing number of great voice AI tools, but ElevenLabs is still leading the space in quality. Speed controls might sound like a small update, but they offer a level of precision that has been difficult to achieve in AI voice synthesis.

Most speech generation tools with a speed option speed up or slow down audio in a way that feels unnatural, but ElevenLabs seems to have implemented it in a way that mimics how people actually adjust their speaking pace. That’s a big deal. Natural pacing variation is key to making AI voices sound more human, and this feature makes their system even more powerful for creative workflows.
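For API users, pacing shows up as a voice setting on the text-to-speech request. Here’s a hedged sketch of the request body — the `speed` field and its rough 0.7–1.2 range reflect my reading of ElevenLabs’ documentation at the time of writing, so treat the exact field names and bounds as assumptions:

```python
# Hypothetical sketch of an ElevenLabs text-to-speech request body with a
# speed setting. Field names and the 0.7-1.2 range are assumptions from
# their docs -- check current documentation before relying on them.

def tts_body(text, speed=1.0):
    """Clamp speed to an assumed safe range and build the request body."""
    speed = max(0.7, min(1.2, speed))  # assumed supported range
    return {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # assumed model identifier
        "voice_settings": {
            "stability": 0.5,
            "similarity_boost": 0.75,
            "speed": speed,  # <1.0 slower delivery, >1.0 faster
        },
    }

slow = tts_body("Take a breath. Then keep going.", speed=0.5)
```

Clamping out-of-range values client-side (as above) is a sensible guard regardless of what the API enforces.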

Additional Findings: Quick Updates on Important News and Workflows

Google’s Veo 2 sees major price drop on Replicate – Previously priced as high as $2.50 per second on some platforms, Veo 2 is now available for just $0.50 per second on Replicate, making high-quality AI video generation significantly more affordable.

Channel42 opens to the public with Pika 2.2 support – The AI video editing platform is now available to everyone and has integrated the Pika 2.2 video model (among other models), which enables multi-image video generation, morphing start and end frames, AI-based object swaps, and more.
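To put the Veo 2 price drop above in concrete terms, per-second pricing makes clip cost a simple multiplication:

```python
# Cost of an AI-generated clip under per-second pricing.
def clip_cost(seconds, rate_per_second):
    return seconds * rate_per_second

old = clip_cost(8, 2.50)  # 8-second clip at the old high of $2.50/s -> $20.00
new = clip_cost(8, 0.50)  # same clip at Replicate's $0.50/s rate -> $4.00
```

At these rates, an iteration budget that used to buy one take now buys five — a real difference when you’re regenerating clips to dial in a shot.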

© 2025 Shannon Leonard