Hi,
It can certainly be done. Causality currently uses the OpenAI API, which promises that the data isn't kept or used for training.
That said, we’re considering simply booting up one of the open-source models and hosting it ourselves, because then we know for a fact that the data stays inside our ecosystem. It would also solve the problem that OpenAI is so desperate to be a crowd-pleaser, and so desperate not to offend anyone, that it’s quite bad for writing. You constantly run into an AI that has been trained not to say certain things. A self-hosted AI would work with you even if you’re doing saucy topics, which I think is appropriate for screenwriting. Running it locally on your own machine would be a tall order, though: we’re still talking many gigabytes of GPU RAM, and no chance of running it on a non-gaming PC.
The AI in Causality is mostly used for grunt work, and that’s mostly where we want to keep it. It’s also what we want to expand on. We know that people generally love having synopses and titles written automatically, and the idea is to do this more automatically, and in bulk: you make a lot of changes, press an AI button, and it figures out which summaries need to be rewritten and does them all at once. What’s holding that back is a billing model, because we can no longer offer it for free.
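To make the bulk idea concrete, here's a minimal sketch of how "figure out which summaries need to be rewritten" could work. This is purely hypothetical illustration, not Causality's actual data model: each scene remembers which version of its text the summary was generated from, and anything that has drifted gets queued for the batch rewrite.

```python
# Hypothetical sketch: track which scene summaries are stale after edits,
# so a single AI button press can rewrite only those, in bulk.
from dataclasses import dataclass

@dataclass
class Scene:
    title: str
    text: str
    summary: str = ""
    summary_of: str = ""  # the scene text the summary was generated from

    @property
    def stale(self) -> bool:
        # A summary is stale if the text has changed since it was written
        # (or if no summary has been generated yet).
        return self.summary_of != self.text

def scenes_needing_rewrite(scenes):
    """Collect the scenes whose summaries should be regenerated in bulk."""
    return [s for s in scenes if s.stale]

scenes = [
    Scene("Opening", "Ext. Harbor - dawn.", "Boats at dawn.", "Ext. Harbor - dawn."),
    Scene("Chase", "Int. Subway - night. (rewritten)"),
]
print([s.title for s in scenes_needing_rewrite(scenes)])  # ['Chase']
```

Only "Chase" is queued here, because "Opening" still has a summary matching its current text.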
We also know that context is everything for an AI. AI programming agents are much smarter than the raw LLM simply because their developers have become good at staging precise context. We have that same ability in Causality: we know a lot about tags and emotions, so we’re in a much better position to write a garbage draft as a starting point.
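As a rough illustration of what "staging precise context" could look like, here's a hypothetical sketch (the field names and prompt wording are my own, not Causality's): structured metadata the app already holds, such as tags and an emotion arc, is assembled into a tight prompt rather than handing the model a blank page.

```python
# Hypothetical sketch: build a precise prompt from structured beat metadata
# (tags, emotions) instead of asking the LLM to write from nothing.

def stage_context(beat: dict) -> str:
    """Assemble a tight, structured prompt from beat metadata."""
    lines = [
        f"Scene: {beat['scene']}",
        f"Tags: {', '.join(beat['tags'])}",
        f"Emotion arc: {' -> '.join(beat['emotions'])}",
        f"Beat: {beat['action']}",
        "Task: write a one-sentence draft summary. Plain and factual.",
    ]
    return "\n".join(lines)

beat = {
    "scene": "Int. Kitchen - night",
    "tags": ["confrontation", "family"],
    "emotions": ["denial", "anger"],
    "action": "Mara confronts her father about the letters.",
}
print(stage_context(beat))
```

The point of the sketch: the more of this context the app can stage automatically, the less the model has to guess, which is where agents get their apparent smarts.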
I don’t really respect AI output for creative writing. For the AI, emotions are things, and subtext is completely lost on it. There’s a hilarious lack of nuance in the writing. I’ve seen AI dialogue like “do you really think I want to be spiraling right now?”. That’s dead in the water.
If this is just a matter of disabling the AI features so you don’t have the buttons at all, that can certainly be done. But the question is whether you’re realistically in danger of pressing them by accident, or whether you just don’t want to be reminded of the possibility.
The comparison with Apple dialing AI back isn’t completely accurate, because whereas their AI was everywhere on your device, for us, it’s practically nowhere. There’s nobody listening or offering suggestions. You specifically press a button when you want some AI to happen, and outside of that, there’s no AI.
The question is, what is the real problem we’re trying to solve?