1. Environment
- OS: Ubuntu 22.04 LTS
- Kernel: 5.15.0-60-generic
- CPU: Intel(R) Xeon(R) Gold 6278C @ 2.60GHz, 8 vCPUs
- GPU: 1 × NVIDIA V100-PCIe-32G
- RAM: 64 GB
- System disk: 512 GiB
- Data disk: 2048 GiB
- Stable Diffusion WebUI version: 0cc0ee1 (2023/2/20)
2. Download Stable Diffusion models
Go to HuggingFace or Civitai to find a model, then download it with the wget command. Note that HuggingFace "/blob/" URLs point to a web page; use "/resolve/" to download the actual file:

```bash
wget https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt
```
Here are some good models for your reference.
2.1. Realistic style models
Stable Diffusion, the original model published by CompVis and StabilityAI.
2.2. Anime style models
I would suggest starting with the "Anything" model if you want to draw anime artwork.
3. Install Stable Diffusion WebUI
3.1. Install dependencies
1. Install the proprietary Nvidia drivers in order to use CUDA. Then reboot.

```bash
sudo apt update
sudo apt purge *nvidia*
# List available drivers for your GPU
ubuntu-drivers list
sudo apt install nvidia-driver-525
```
2. Follow the instructions on Nvidia Developers to install CUDA. Reboot again.

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda-repo-ubuntu2204-12-1-local_12.1.0-530.30.02-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-1-local_12.1.0-530.30.02-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-1-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda
```
3. Verify the installation.

```bash
nvidia-smi
nvcc --version
```
4. Install Python, wget, and git.

```bash
sudo apt install python3 python3-pip python3-virtualenv wget git
```
5. Because SD WebUI needs a Python 3.10 environment, we have to install Miniconda.

```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
# Confirm everything and install it
```
6. Create a virtual environment with Python 3.10.6.

```bash
conda create --name sdwebui python=3.10.6
```
3.2. Clone Stable Diffusion WebUI repository
1. Clone the repository of Stable Diffusion WebUI.

```bash
cd ~
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```
2. Move the .ckpt models into stable-diffusion-webui.

```bash
mv ~/anything-v4.5-pruned.ckpt ~/stable-diffusion-webui/models/Stable-diffusion/
```
3. Enter the virtual environment.

```bash
conda activate sdwebui
```
4. If you want to activate the virtual environment in a bash script, add these lines at the top of webui-user.sh:

```bash
eval "$(conda shell.bash hook)"
conda activate sdwebui
```
3.3. Setup commandline arguments
According to the Wiki, we have to set some command-line arguments in order to start SD WebUI.
Edit webui-user.sh:

```bash
vim webui-user.sh
```
If your GPU has less than 8 GB of VRAM, add: COMMANDLINE_ARGS=--medvram --opt-split-attention
If it has less than 4 GB of VRAM, use --lowvram instead: COMMANDLINE_ARGS=--lowvram --opt-split-attention
Note that both flags concern GPU VRAM, not system RAM.
You could also add --listen so you can access the WebUI from other PCs on the same network, or add --share to generate a public Gradio link for accessing the WebUI when deploying it to a server.
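Putting the pieces together, a minimal webui-user.sh might look like the sketch below. The flag values shown are examples for a low-VRAM, LAN-accessible setup, not required settings; adjust them to your hardware.

```bash
# webui-user.sh (excerpt) - an example configuration, adjust to your setup.
# The next two lines are only needed if you use the conda environment:
eval "$(conda shell.bash hook)"
conda activate sdwebui

# --medvram: trade speed for lower VRAM usage; --listen: accept LAN connections
export COMMANDLINE_ARGS="--medvram --opt-split-attention --listen"
```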
3.4. Launch Stable Diffusion WebUI
1. Run webui.sh; it will install all the dependencies. Then a link should pop up: http://127.0.0.1:7860

```bash
./webui.sh
```
2. To access the WebUI from another PC on the same network, enter http://<IP of the PC>:7860 in the address bar of your browser. Don't forget to open the firewall port:

```bash
sudo ufw allow 7860/tcp
sudo ufw reload
```
3.5. How to update Stable Diffusion WebUI
1. Note the current commit hash (this prints a commit, not a branch name).

```bash
git rev-parse --short HEAD
```
2. Pull the latest files.

```bash
git pull
```
3. If something is broken after updating, roll back to the commit you noted.

```bash
git checkout <commit hash>
```
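The note-commit, update, roll-back flow above can be sketched end to end. This demo runs in a throwaway repository so it is self-contained (assuming only that git is installed); in practice you would run the same commands inside ~/stable-diffusion-webui, with `git pull` in place of the stand-in commit:

```bash
set -e
# Work in a throwaway repo so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "before update"

# Step 1: note the current commit before updating.
prev=$(git rev-parse --short HEAD)

# Step 2: update (a stand-in commit here; in the real repo this is `git pull`).
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "after update"

# Step 3: something broke - roll back to the noted commit.
git checkout -q "$prev"
git rev-parse --short HEAD   # matches $prev again
```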
4. How to use Stable Diffusion WebUI
4.1. Prompts
Use "Prompts" and "Negative Prompts" to tell the AI what to draw.
See Voldy's artist name list and Danbooru tags for choosing prompts.
For example, to draw Jeanne from Fate/Grand Order, we type the name of the character and characteristics of her body in the prompt fields.
```
jeanne d'arc from fate grand order, girl, (best quality), (masterpiece), (high detail), ((full face)), sharp, ((looking at viewer)), ((detailed pupils)), (thick thighs), (((full body))), (large breasts), vagina, nude, nipples
```
Then type the negative prompts.

```
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
```
4.2. Text to image
- Go to SD WebUI and type the prompts.
- Check Restore faces.
- Click the Generate button; it will start generating an image.
- You will see the result in the right panel.

All generated images are stored under stable-diffusion-webui/outputs. You can also increase the Batch count value to generate multiple images in one run.
4.3. Image to image
- Type the prompts.
- Upload an image, check Restore faces, and click Generate.

You can change the values of CFG Scale and Denoising strength. The lower the Denoising strength, the more similar the output will be to the original image.
Click Interrogate DeepBooru to generate prompts automatically based on the image you uploaded.
References
- Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10684-10695).
- Stable Diffusion web UI Wiki - GitHub
- Voldy Retard Guide: The definitive Stable Diffusion experience™
- Install Stable Diffusion WebUI locally on Ubuntu Linux