AI Arts Forum :: Talking about Generative AI Art Tools

How to Fine-Tune Pre-Trained Stable Diffusion Models Using Custom Images

Aditya Jhaveri

New member
Problem statement:
I am using Stable Diffusion XL Base 1.0 for image generation, but the pipeline does not take my custom image as input. I would like to generate a new image based on both my input image and a text prompt.

Description:

I need a solution to generate anthropomorphic pet portraits from user-uploaded pet photos. Specifically, I want the AI to use the pet's face from the uploaded image and create the rest of the body based on a given prompt, such as a king, doctor, or lawyer. The generated portraits should retain the pet's unique features, so it is clear each portrait shows the same pet.

The problem I'm encountering is that when I input a custom pet image into the pre-trained Stable Diffusion model with an anthropomorphic prompt, the model generates an image purely from its training data instead of using my custom image. I want the model to generate new images from the provided pet photo, guided by the prompt, rather than producing unrelated images.
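To show concretely what I mean, here is a minimal sketch of the img2img setup I am describing, assuming the Hugging Face diffusers library. The file path "pet.jpg", the role list, and the strength value are placeholders, not a confirmed solution:

```python
def build_prompt(role: str) -> str:
    """Turn a role like 'king' into the full anthropomorphic prompt."""
    return (
        f"an anthropomorphic portrait of this pet dressed as a {role}, "
        "detailed oil painting, head and shoulders"
    )

def generate(image_path: str, role: str, strength: float = 0.55):
    # Heavy imports are kept inside the function so the sketch can be
    # read (and the prompt helper tested) without a GPU environment.
    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")

    # strength controls how far the output departs from the init image:
    # lower values preserve more of the pet's features, higher values
    # follow the text prompt more and the photo less.
    init_image = load_image(image_path).resize((1024, 1024))
    result = pipe(
        prompt=build_prompt(role),
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
    ).images[0]
    return result

if __name__ == "__main__":
    generate("pet.jpg", "king").save("pet_king.png")
```

Even with this img2img setup, at low strength the output stays too close to the original photo, and at high strength the pet's identity is lost, which is why I am asking about fine-tuning.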

How can I fine-tune a pre-trained Stable Diffusion model, or any relevant model, with our custom images so that it uses these images to generate new portraits according to the given input and prompt?
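For context, one alternative to full fine-tuning that I have been looking at is IP-Adapter, which conditions generation on a reference image instead of retraining the model's weights. A rough sketch, assuming diffusers' IP-Adapter support; the adapter weights, scale value, and file path are assumptions on my part, not a verified recipe:

```python
def adapter_scale(identity_weight: float) -> float:
    """Clamp the IP-Adapter scale to its useful [0.0, 1.0] range."""
    return max(0.0, min(1.0, identity_weight))

def generate_portrait(image_path: str, role: str, identity_weight: float = 0.6):
    # Heavy imports kept inside the function so the helper above can be
    # exercised without downloading any model weights.
    import torch
    from diffusers import AutoPipelineForText2Image
    from diffusers.utils import load_image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")

    # IP-Adapter injects features from a reference image into the
    # generation process, so the pet photo steers the output without
    # any fine-tuning of the base model.
    pipe.load_ip_adapter(
        "h94/IP-Adapter",
        subfolder="sdxl_models",
        weight_name="ip-adapter_sdxl.bin",
    )
    # Higher scale = stronger resemblance to the reference photo,
    # weaker adherence to the text prompt.
    pipe.set_ip_adapter_scale(adapter_scale(identity_weight))

    pet = load_image(image_path)
    return pipe(
        prompt=f"an anthropomorphic portrait of a pet dressed as a {role}",
        ip_adapter_image=pet,
    ).images[0]
```

I am not sure whether an adapter approach like this, or per-pet fine-tuning (e.g. DreamBooth-style training on each customer's photos), is the right fit here, so any guidance is appreciated.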


