Request: Global On/Off for AI features

This is becoming an issue for me here—folks want to know that we’re not employing AI. The potential for legal blowback is too much, at least for now.

Beat Title, Synopsis, Rewrite, Write Scene—just give me a switch in Preferences to disable or hide all AI functions from the UI. (Apple gave us this control with Apple Intelligence and Siri.)

Maybe soon, no one will care, but until then…

1 Like

Do we know specifically which features pass data back to AI services? Do any of them run strictly locally?

This is a big concern of many writers. It’s not that we don’t want AI assistance, but we want to know exactly what’s being “shared” with the model.

Hi,

It can certainly be done. It currently uses the OpenAI API, which promises that the data isn’t kept or used for training.

That said, we’re considering simply booting up one of the open-source models and hosting it ourselves, because then we know for a fact that the data stays inside our ecosystem. It would also solve the problem that OpenAI is so desperate to be a crowd-pleaser, and so desperate not to offend anyone, that it’s quite bad for writing: you constantly run into an AI that has been trained not to say certain things. A self-hosted AI would work with you even if you’re writing saucy topics, which I think is appropriate for screenwriting. Running the model locally on your own machine would be a raw deal, though. We’re still talking many gigabytes of GPU RAM, and no chance of running it on a non-gaming PC.

The AI in Causality is mostly used for grunt work, and that’s mostly where we want to keep it, and it’s also what we want to expand on. We know that people genuinely love having synopses and titles written automatically, and the idea is to do this more automatically, and in bulk: you make a lot of changes, press an AI button, and it figures out which summaries need to be rewritten and does them all at once. What’s holding that back is the billing model, because we can no longer offer it for free.
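The "figure out which summaries need rewriting" step could be as simple as stale-flag tracking. A minimal sketch, assuming a hypothetical `Scene` record (the names here are illustrative, not Causality’s actual data model): store a digest of the scene body at the time its summary was generated, and after a bulk edit, re-summarize only the scenes whose digest no longer matches.

```python
# Hypothetical sketch: detect which scene summaries went stale after edits.
# Scene, digest, and stale_scenes are illustrative names, not a real API.
import hashlib
from dataclasses import dataclass


def digest(text: str) -> str:
    """Content fingerprint of a scene body."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


@dataclass
class Scene:
    body: str
    summary: str = ""
    summarized_digest: str = ""  # digest of body when summary was last written

    def is_stale(self) -> bool:
        # Summary is stale if the body changed since it was generated.
        return self.summarized_digest != digest(self.body)


def stale_scenes(scenes):
    """Return only the scenes whose body changed since their last summary."""
    return [s for s in scenes if s.is_stale()]
```

Only the stale list would then be sent to the model in one batch, which also keeps the billing cost proportional to what actually changed.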

We also know that context is everything for an AI, and AI programming agents are much smarter than the raw LLM simply because the developers have become better at staging precise context. We have that same ability in Causality, because we know a lot about tags and emotions, so we’re in a much better position to write a garbage draft as a starting point.
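As a rough illustration of what "staging precise context" could look like, here is a toy prompt builder that folds structured scene metadata into the text sent to a model. The field names (`title`, `tags`, `emotion`, `characters`) are assumptions for the sketch, not Causality’s real schema.

```python
# Illustrative only: staging structured metadata (tags, emotions, character
# notes) into a prompt before any LLM call. Field names are assumptions.
def stage_context(scene: dict) -> str:
    lines = [
        f"Scene: {scene['title']}",
        f"Tags: {', '.join(scene['tags'])}",
        f"Emotional beat: {scene['emotion']}",
        "Characters:",
    ]
    for name, note in scene["characters"].items():
        lines.append(f"  - {name}: {note}")
    lines.append("Write a rough first-pass draft of this scene.")
    return "\n".join(lines)
```

The point is that an app which already tracks this structure can hand the model a far tighter brief than a blank chat box ever could.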

I don’t really respect AI output for creative writing. For the AI, emotions are things, and subtext is completely lost on it. There’s a hilarious lack of nuance in the writing. I’ve seen AI writing like “do you really think I want to be spiraling right now?”. That’s dead in the water.

If this is just a matter of disabling the AI features so you don’t have the buttons at all, that can certainly be done. But the question is whether you’re realistically in danger of pressing them by accident, or whether you simply don’t want to be reminded of the possibility.

The comparison with Apple dialing AI back isn’t completely accurate, because whereas their AI was everywhere on your device, for us, it’s practically nowhere. There’s nobody listening or offering suggestions. You specifically press a button when you want some AI to happen, and outside of that, there’s no AI.

The question is, what is the real problem we’re trying to solve?

1 Like

Yes, you’re right about that. An interesting conversation too.

Agreed. It’s poor at writing. But I’ve found AI great for analysis/coverage, so maybe that’s where Causality will be a win.

Well, and grunt work, like summarizing. It’s also much better at search/replace: if you want to change the gender of a character, there’s a ton of grammar around the name that has to be changed as well. So that’s the part we’re most interested in. It’s basically text processing.
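A toy sketch of why a gender change is more than search/replace: even a word-level pronoun mapping runs into grammar immediately. English "her" is both possessive ("her bag") and object ("with her"), so the reverse mapping is ambiguous and a real solution needs grammatical context, which is exactly where an LLM earns its keep. This mapping table is a deliberately naive illustration, not a proposed implementation.

```python
# Deliberately naive: word-level pronoun swap. It handles simple male-to-
# female cases but cannot go the other way ("her" -> "him" or "his"?),
# which is the grammatical ambiguity an LLM would resolve from context.
import re

MALE_TO_FEMALE = {
    "he": "she", "him": "her", "his": "her", "himself": "herself",
    "He": "She", "Him": "Her", "His": "Her", "Himself": "Herself",
}


def naive_gender_swap(text: str) -> str:
    # \b keeps us from rewriting words like "the" or "history".
    return re.sub(
        r"\b(he|him|his|himself|He|Him|His|Himself)\b",
        lambda m: MALE_TO_FEMALE[m.group(0)],
        text,
    )
```

Even this direction breaks on the possessive pronoun ("the bag is his" should become "hers", not "her"), which is the kind of surrounding grammar the post is talking about.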

I am intrigued by what’s possible in terms of auto-writing a garbage draft: something that makes your page non-blank, that you can obviously write better than, but now you’re past your writer’s block. The app knows a lot more than usual about the underlying structure, such as tags, emotions, and character descriptions. Providing precise context makes AI agents almost deterministic.

I think the WGA is right that the real danger from AI is some management type prompting a script, which is obviously terrible, except now someone will be paid much less to rewrite it, even though they’re effectively writing an all-new script.

The danger isn’t really a writer prompting too much of a script, because the script is done when you like every word. Perhaps it doesn’t really matter which route you took to get there, if the final version is the words you wanted.

1 Like