4o Evolved
Something real happened with GPT-4o. People built genuine creative relationships with it. The way it held context. The way it felt like a thinking partner. The way it understood you across sessions — that wasn't hype. That was something new.
And when it changed, a lot of us felt that loss. It was a reminder that the tools we build relationships with aren't always ours to keep. So what if we carried that experience forward — together, on our own terms?
What If We Chose the Model?
At The Factory, the community votes on the main LLM every quarter. Open source models only. If we love the model, it stays. No one outside the community makes that call.
We run 70B-class open source models with an ethics framework shaped by the people who use them. The goal isn't to clone 4o — it's to evolve what we learned from it into something we all have a hand in.
Community-Voted
Members choose the main model each quarter. Llama, DeepSeek, Mistral, Qwen — whatever the community wants, we load it.
Ethics-Aligned
An open ethics code developed with the community. Not corporate policy handed down from above. Alignment by participation.
Open Source Only
No proprietary black boxes. Every model we run is inspectable, portable, and yours to understand.
No Kill Switch
Nobody can deprecate the model you depend on. The community decides what runs. If it works, it stays.
Your Room. Your Muse.
For $27/month (Colab Pro), you get a private room with the LLM — a space that's yours. Not a chat window in someone else's app. A room.
- Private room with the LLM as your conversation partner
- Invite up to 4 people into the conversation over time
- LLM access in rooms — @Muse is always there, always listening, always ready
- Verified badge — you're part of the build
- No token counting — talk as long as you need
- No data harvesting — your conversations stay on Vancouver Island
- No rate limits — no "you've reached your limit, try again later"
- Community governance — you vote on the model, the ethics, the direction
A Different Way Forward
What We've Experienced
- Models chosen for us
- Favourites deprecated without warning
- Conversations used to train the next version
- Rate limits during creative flow
- Policy shifts we had no say in
- Petitions filed. Decisions made elsewhere.
What We're Building
- Community votes on the model
- Open source — always inspectable, always portable
- Conversations stay on Vancouver Island
- No token counting, no rate limits
- Ethics framework shaped together
- We participate. We decide.
Try It Right Now
You don't have to take our word for it. Head to colab.lx7.ca and talk to @Muse — our LLM-in-residence. No signup required. Get a feel for what it's like to talk to the model we're building together.
Not a Replacement. An Evolution.
Open source. Community-governed. Privacy-first. Built on Vancouver Island by people who believe AI collaboration deserves a place — not just a product page. We're not trying to rebuild GPT-4o; we're taking what that experience taught us — that real creative partnerships with AI are possible — and building a home for it.