Chroma - v.35
Recommended Prompt
Extreme close-up photograph of a single tiger eye, direct frontal view. The iris is very detailed and the pupil resembles a dark void. The words "Chroma V.35 now with less steps" run across the lower portion of the image in large white stylized letters, with brush strokes resembling Japanese calligraphy. Each strand of the thick fur is highly detailed and distinguishable. Natural lighting to capture authentic eye shine and depth.
Recommended Negative Prompt
low quality, ugly, unfinished, out of focus
Recommended Parameters
samplers
steps
cfg
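If you script your generations instead of using the ComfyUI workflow linked further down, the sketch below shows how the recommended prompt and negative prompt could be plugged into a diffusers-style pipeline. It is only a sketch: it assumes a recent diffusers release that ships ChromaPipeline, the repo id is a placeholder (check the Hugging Face repo for the current checkpoint), and the step count and CFG value are illustrative since this card does not pin them down.

```python
# Minimal sketch, not an official recipe. Assumes a diffusers release that
# includes ChromaPipeline; the repo id, step count, and CFG value below are
# placeholders, not settings taken from this card.
import torch
from diffusers import ChromaPipeline

pipe = ChromaPipeline.from_pretrained(
    "lodestones/Chroma",              # placeholder repo id; check the HF repo
    torch_dtype=torch.bfloat16,
).to("cuda")

# Recommended prompt from the card, shortened here for readability.
prompt = (
    "Extreme close-up photograph of a single tiger eye, direct frontal view. "
    "The iris is very detailed and the pupil resembles a dark void. "
    "Natural lighting to capture authentic eye shine and depth."
)
negative_prompt = "low quality, ugly, unfinished, out of focus"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,           # placeholder; tune for the v.35 "less steps" checkpoint
    guidance_scale=4.0,               # placeholder CFG value
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("chroma_tiger_eye.png")
```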
Tips
The model is fully Apache 2.0 licensed for open use and modification.
Chroma is based on FLUX.1-schnell and has 8.9B parameters.
It utilizes a curated dataset of 5 million images from 20 million samples.
Quantization options include FP8 Scaled Quant and GGUF Quantized formats for improved inference speed and compatibility.
Check the Hugging Face repository for the latest model versions and training progress.
Support community-driven AI efforts to sustain long-term training, which has already used over 5,000 H100 hours.
Hey everyone!
Chroma is an 8.9B parameter model based on FLUX.1-schnell (technical report coming soon!). It’s fully Apache 2.0 licensed, ensuring that anyone can use, modify, and build on top of it, with no corporate gatekeeping.
The model is still training right now, and I’d love to hear your thoughts! Your input and feedback are really appreciated.
What Chroma Aims to Do
Training on a 5M-image dataset, curated from 20M samples including anime, furry, artistic stuff, and photos.
Fully uncensored, reintroducing missing anatomical concepts.
Built as a reliable open-source option for those who need it.
See the Progress
Hugging Face Repo: https://huggingface.co/lodestones/Chroma (go to this repo for the latest model!)
Hugging Face Debug Repo: https://huggingface.co/lodestones/chroma-debug-development-only
Live AIM Training Logs: https://training.lodestone-rock.com/
ComfyUI Inference node [WIP]: https://github.com/lodestone-rock/flux-mod
ComfyUI workflow: https://huggingface.co/lodestones/Chroma/resolve/main/simple_workflow.json
Training code!: https://github.com/lodestone-rock/flow
Quantization options
Alternative option: FP8 Scaled Quant (the format used by ComfyUI, with a possible inference speed increase)
Alternative option: GGUF Quantized (you will need to install the ComfyUI-GGUF custom node)
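Outside ComfyUI, the GGUF files can in principle also be loaded through diffusers' GGUF support. The snippet below is a rough sketch under several assumptions: it requires a diffusers build that includes GGUFQuantizationConfig and the Chroma model classes, and the .gguf filename and repo id are placeholders rather than files referenced by this card. Within ComfyUI itself, installing the ComfyUI-GGUF custom node and loading the .gguf checkpoint through its Unet loader is the intended route.

```python
# Rough sketch under stated assumptions: a diffusers build with Chroma classes
# and GGUF support; the .gguf filename and repo id below are placeholders.
import torch
from diffusers import ChromaPipeline, ChromaTransformer2DModel, GGUFQuantizationConfig

transformer = ChromaTransformer2DModel.from_single_file(
    "chroma-unlocked-v35-Q8_0.gguf",              # placeholder local GGUF file
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = ChromaPipeline.from_pretrained(
    "lodestones/Chroma",                          # placeholder repo id for the other components
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")
```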
Special Thanks
Shoutout to Fictional.ai for the awesome support — seriously appreciate you helping push open-source AI forward.
Support Open-Source AI
The current pretraining run has already used 5000+ H100 hours, and keeping this going long-term is expensive.
If you believe in accessible, community-driven AI, any support would be greatly appreciated.
👉 Support us on Ko-fi: https://ko-fi.com/lodestonerock/goal?g=1 — Every bit helps!
ETH: 0x679C0C419E949d8f3515a255cE675A1c4D92A3d7
my discord: discord.gg/SQVcWVbqKx

