
Command Line Arguments and Settings

w-e-w edited this page Nov 2, 2024 · 41 revisions

Environment variables

| Name | Description |
| --- | --- |
| `PYTHON` | Sets a custom path for the Python executable. |
| `VENV_DIR` | Specifies the path for the virtual environment. Default is `venv`. Special value `-` runs the script without creating a virtual environment. |
| `COMMANDLINE_ARGS` | Additional command line arguments for the main program. |
| `IGNORE_CMD_ARGS_ERRORS` | Set to anything to make the program not exit with an error if an unexpected command line argument is encountered. |
| `REQS_FILE` | Name of the requirements.txt file with dependencies that will be installed when launch.py is run. Defaults to `requirements_versions.txt`. |
| `TORCH_COMMAND` | Command for installing PyTorch. |
| `INDEX_URL` | `--index-url` parameter for pip. |
| `TRANSFORMERS_CACHE` | Path to where the transformers library will download and keep its files related to the CLIP model. |
| `CUDA_VISIBLE_DEVICES` | Select the GPU to use for your instance on a system with multiple GPUs. For example, to use the secondary GPU, put "1". Add it as a new line in webui-user.bat (not in `COMMANDLINE_ARGS`): `set CUDA_VISIBLE_DEVICES=0`. Alternatively, just use the `--device-id` flag in `COMMANDLINE_ARGS`. |
| `SD_WEBUI_LOG_LEVEL` | Log verbosity. Supports any valid logging level supported by Python's built-in `logging` module. Defaults to `INFO` if not set. |
| `SD_WEBUI_CACHE_FILE` | Cache file path. Defaults to `cache.json` in the root directory if not set. |
| `SD_WEBUI_RESTART` | A value set by the launcher script (webui.bat, webui.sh) informing the webui that the restart function is available. |
| `SD_WEBUI_RESTARTING` | An internal value signifying whether the webui is currently restarting or reloading, used for disabling certain actions such as auto-launching the browser. Set to `1` to disable auto-launching the browser; set to `0` to enable auto-launch even when restarting. Certain extensions might use this value for a similar purpose. |

webui-user

The recommended way to specify environment variables is by editing webui-user.bat (Windows) or webui-user.sh (Linux):

  • set VARNAME=VALUE for Windows
  • export VARNAME="VALUE" for Linux

For example, in Windows:

```bat
set COMMANDLINE_ARGS=--xformers --skip-torch-cuda-test --no-half-vae --api --ckpt-dir A:\stable-diffusion-checkpoints
```
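On Linux, the equivalent line goes in webui-user.sh. A sketch mirroring the Windows example, where the checkpoint directory path is only illustrative:

```shell
# webui-user.sh — same flags as the Windows example above;
# the --ckpt-dir path is a placeholder, adjust to your system
export COMMANDLINE_ARGS="--xformers --skip-torch-cuda-test --no-half-vae --api --ckpt-dir /mnt/models/stable-diffusion-checkpoints"
```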

Running online

Use the `--share` option to run online. You will get an xxx.app.gradio link. This is the intended way to use the program in Colab notebooks. You may set up authentication for the shared Gradio instance with the flag `--gradio-auth username:password`, optionally providing multiple username:password pairs separated by commas.
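For example, a webui-user.bat line enabling sharing with authentication might look like this (the username and password are placeholders):

```shell
set COMMANDLINE_ARGS=--share --gradio-auth myuser:mypassword
```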

Running within Local Area Network

Use `--listen` to make the server listen for network connections. This will allow computers on the local network to access the UI and, if you configure port forwarding, computers on the internet as well. Example address: http://192.168.1.3:7860, where "192.168.1.3" is the machine's local IP address.

Use `--port xxxx` to make the server listen on a specific port, xxxx being the wanted port. Remember that all ports below 1024 require root/admin rights; for this reason it is advised to use a port above 1024. The server defaults to port 7860 if available.
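For example, to serve the UI to the local network on port 8080 (an arbitrary choice above 1024), webui-user.bat could contain:

```shell
set COMMANDLINE_ARGS=--listen --port 8080
```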

Running on CPU

Running with only your CPU is possible, but not recommended: it is very slow and there is no fp16 implementation.

To run, you must have all of these flags enabled: `--use-cpu all --precision full --no-half --skip-torch-cuda-test`

Though this is a questionable way to run the webui due to the very slow generation speeds, using the various AI upscalers and captioning tools may still be useful to some people.
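Put together in webui-user.bat, a CPU-only configuration would look like:

```shell
set COMMANDLINE_ARGS=--use-cpu all --precision full --no-half --skip-torch-cuda-test
```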

Extras:

For the technically inclined, here are some steps a user provided to boost CPU performance:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10514

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10516

All command line arguments

**CONFIGURATION**

| Argument Command | Value | Default | Description |
| --- | --- | --- | --- |
| `-h, --help` | None | False | Show this help message and exit. |
| `--exit` | | | Terminate after installation. |
| `--data-dir` | DATA_DIR | ./ | Base path where all user data is stored. |
| `--models-dir` | MODELS | None | Base path where models are stored; overrides --data-dir. |
| `--config` | CONFIG | configs/stable-diffusion/v1-inference.yaml | Path to config which constructs model. |
| `--ckpt` | CKPT | model.ckpt | Path to checkpoint of Stable Diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded. |
| `--ckpt-dir` | CKPT_DIR | None | Path to directory with Stable Diffusion checkpoints. |
| `--no-download-sd-model` | None | False | Don't download SD1.5 model even if no model is found. |
| `--do-not-download-clip` | None | False | Do not download CLIP model even if it's not included in the checkpoint. |
| `--vae-dir` | VAE_PATH | None | Path to Variational Autoencoders model. |
| `--vae-path` | VAE_PATH | None | Checkpoint to use as VAE; setting this argument disables all settings related to VAE. |
| `--gfpgan-dir` | GFPGAN_DIR | GFPGAN/ | GFPGAN directory. |
| `--gfpgan-model` | GFPGAN_MODEL | | GFPGAN model file name. |
| `--codeformer-models-path` | CODEFORMER_MODELS_PATH | models/Codeformer/ | Path to directory with codeformer model file(s). |
| `--gfpgan-models-path` | GFPGAN_MODELS_PATH | models/GFPGAN | Path to directory with GFPGAN model file(s). |
| `--esrgan-models-path` | ESRGAN_MODELS_PATH | models/ESRGAN | Path to directory with ESRGAN model file(s). |
| `--bsrgan-models-path` | BSRGAN_MODELS_PATH | models/BSRGAN | Path to directory with BSRGAN model file(s). |
| `--realesrgan-models-path` | REALESRGAN_MODELS_PATH | models/RealESRGAN | Path to directory with RealESRGAN model file(s). |
| `--scunet-models-path` | SCUNET_MODELS_PATH | models/ScuNET | Path to directory with ScuNET model file(s). |
| `--swinir-models-path` | SWINIR_MODELS_PATH | models/SwinIR | Path to directory with SwinIR and SwinIR v2 model file(s). |
| `--ldsr-models-path` | LDSR_MODELS_PATH | models/LDSR | Path to directory with LDSR model file(s). |
| `--dat-models-path` | DAT_MODELS_PATH | models/DAT | Path to directory with DAT model file(s). |
| `--lora-dir` | LORA_DIR | models/Lora | Path to directory with Lora networks. |
| `--clip-models-path` | CLIP_MODELS_PATH | None | Path to directory with CLIP model file(s). |
| `--embeddings-dir` | EMBEDDINGS_DIR | embeddings/ | Embeddings directory for textual inversion (default: embeddings). |
| `--textual-inversion-templates-dir` | TEXTUAL_INVERSION_TEMPLATES_DIR | textual_inversion_templates | Directory with textual inversion templates. |
| `--hypernetwork-dir` | HYPERNETWORK_DIR | models/hypernetworks/ | Hypernetwork directory. |
| `--localizations-dir` | LOCALIZATIONS_DIR | localizations/ | Localizations directory. |
| `--styles-file` | STYLES_FILE | styles.csv | Path or wildcard path of styles files; allows multiple entries. |
| `--ui-config-file` | UI_CONFIG_FILE | ui-config.json | Filename to use for UI configuration. |
| `--no-progressbar-hiding` | None | False | Do not hide progress bar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser). |
| `--ui-settings-file` | UI_SETTINGS_FILE | config.json | Filename to use for UI settings. |
| `--allow-code` | None | False | Allow custom script execution from web UI. |
| `--share` | None | False | Use share=True for gradio and make the UI accessible through their site. |
| `--listen` | None | False | Launch gradio with 0.0.0.0 as server name, allowing it to respond to network requests. |
| `--port` | PORT | 7860 | Launch gradio with given server port; you need root/admin rights for ports < 1024; defaults to 7860 if available. |
| `--hide-ui-dir-config` | None | False | Hide directory configuration from web UI. |
| `--freeze-settings` | None | False | Disable editing of all settings globally. |
| `--freeze-settings-in-sections` | None | False | Disable editing settings in specific sections of the settings page by specifying a comma-delimited list such as "saving-images,upscaling". The list of section names can be found in the modules/shared_options.py file. |
| `--freeze-specific-settings` | None | False | Disable editing of individual settings by specifying a comma-delimited list like "samples_save,samples_format". The list of setting names can be found in the config.json file. |
| `--enable-insecure-extension-access` | None | False | Enable extensions tab regardless of other options. |
| `--gradio-debug` | None | False | Launch gradio with --debug option. |
| `--gradio-auth` | GRADIO_AUTH | None | Set gradio authentication like username:password; or comma-delimit multiple like u1:p1,u2:p2,u3:p3. |
| `--gradio-auth-path` | GRADIO_AUTH_PATH | None | Set gradio authentication file path, e.g. /path/to/auth/file; same auth format as --gradio-auth. |
| `--disable-console-progressbars` | None | False | Do not output progress bars to console. |
| `--enable-console-prompts` | None | False | Print prompts to console when generating with txt2img and img2img. |
| `--api` | None | False | Launch web UI with API. |
| `--api-auth` | API_AUTH | None | Set authentication for API like username:password; or comma-delimit multiple like u1:p1,u2:p2,u3:p3. |
| `--api-log` | None | False | Enable logging of all API requests. |
| `--nowebui` | None | False | Only launch the API, without the UI. |
| `--ui-debug-mode` | None | False | Don't load model, to quickly launch UI. |
| `--device-id` | DEVICE_ID | None | Select the default CUDA device to use (export CUDA_VISIBLE_DEVICES=0,1 etc. might be needed before). |
| `--administrator` | None | False | Administrator privileges. |
| `--cors-allow-origins` | CORS_ALLOW_ORIGINS | None | Allowed CORS origin(s) in the form of a comma-separated list (no spaces). |
| `--cors-allow-origins-regex` | CORS_ALLOW_ORIGINS_REGEX | None | Allowed CORS origin(s) in the form of a single regular expression. |
| `--tls-keyfile` | TLS_KEYFILE | None | Partially enables TLS; requires --tls-certfile to fully function. |
| `--tls-certfile` | TLS_CERTFILE | None | Partially enables TLS; requires --tls-keyfile to fully function. |
| `--disable-tls-verify` | None | False | When passed, enables the use of self-signed certificates. |
| `--subpath` | SERVER_SUB_PATH | | Customize the subpath for gradio; use with reverse proxy. |
| `--server-name` | SERVER_NAME | None | Sets hostname of server. |
| `--no-gradio-queue` | None | False | Disables gradio queue; causes the webpage to use http requests instead of websockets; was the default in earlier versions. |
| `--gradio-allowed-path` | None | None | Add path to Gradio's allowed_paths; makes it possible to serve files from it. |
| `--no-hashing` | None | False | Disable SHA-256 hashing of checkpoints to help loading performance. |
| `--skip-version-check` | None | False | Do not check versions of torch and xformers. |
| `--skip-python-version-check` | None | False | Do not check version of Python. |
| `--skip-torch-cuda-test` | None | False | Do not check if CUDA is able to work properly. |
| `--skip-install` | None | False | Skip installation of packages. |
| `--loglevel` | None | None | Log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG. |
| `--log-startup` | None | False | launch.py argument: print a detailed log of what's happening at startup. |
| `--api-server-stop` | None | False | Enable server stop/restart/kill via API. |
| `--timeout-keep-alive` | int | 30 | Set timeout_keep_alive for uvicorn. |
**PERFORMANCE**

| Argument Command | Value | Default | Description |
| --- | --- | --- | --- |
| `--xformers` | None | False | Enable xformers for cross attention layers. |
| `--force-enable-xformers` | None | False | Enable xformers for cross attention layers regardless of whether the checking code thinks you can run it; do not make bug reports if this fails to work. |
| `--xformers-flash-attention` | None | False | Enable xformers with Flash Attention to improve reproducibility (supported for SD2.x or variant only). |
| `--opt-sdp-attention` | None | False | Enable scaled dot product cross-attention layer optimization; requires PyTorch 2.*. |
| `--opt-sdp-no-mem-attention` | None | False | Enable scaled dot product cross-attention layer optimization without memory efficient attention; makes image generation deterministic; requires PyTorch 2.*. |
| `--opt-split-attention` | None | False | Force-enables Doggettx's cross-attention layer optimization. By default, it's on for CUDA-enabled systems. |
| `--opt-split-attention-invokeai` | None | False | Force-enables InvokeAI's cross-attention layer optimization. By default, it's on when CUDA is unavailable. |
| `--opt-split-attention-v1` | None | False | Enable older version of split attention optimization that does not consume all VRAM available. |
| `--opt-sub-quad-attention` | None | False | Enable memory efficient sub-quadratic cross-attention layer optimization. |
| `--sub-quad-q-chunk-size` | SUB_QUAD_Q_CHUNK_SIZE | 1024 | Query chunk size for the sub-quadratic cross-attention layer optimization to use. |
| `--sub-quad-kv-chunk-size` | SUB_QUAD_KV_CHUNK_SIZE | None | KV chunk size for the sub-quadratic cross-attention layer optimization to use. |
| `--sub-quad-chunk-threshold` | SUB_QUAD_CHUNK_THRESHOLD | None | The percentage of VRAM threshold for the sub-quadratic cross-attention layer optimization to use chunking. |
| `--opt-channelslast` | None | False | Enable alternative layout for 4d tensors; may result in faster inference only on Nvidia cards with Tensor cores (16xx and higher). |
| `--disable-opt-split-attention` | None | False | Force-disables cross-attention layer optimization. |
| `--disable-nan-check` | None | False | Do not check if produced images/latent spaces have NaNs; useful for running without a checkpoint in CI. |
| `--use-cpu` | {all, sd, interrogate, gfpgan, bsrgan, esrgan, scunet, codeformer} | None | Use CPU as torch device for specified modules. |
| `--use-ipex` | None | False | Use Intel XPU as torch device. |
| `--no-half` | None | False | Do not switch the model to 16-bit floats. |
| `--precision` | {full, half, autocast} | autocast | Evaluate at this precision. |
| `--no-half-vae` | None | False | Do not switch the VAE model to 16-bit floats. |
| `--upcast-sampling` | None | False | Upcast sampling. No effect with --no-half. Usually produces similar results to --no-half with better performance while using less memory. |
| `--medvram` | None | False | Enable Stable Diffusion model optimizations that sacrifice some performance for low VRAM usage. |
| `--medvram-sdxl` | None | False | Enable --medvram optimization just for SDXL models. |
| `--lowvram` | None | False | Enable Stable Diffusion model optimizations that sacrifice a lot of speed for very low VRAM usage. |
| `--lowram` | None | False | Load Stable Diffusion checkpoint weights to VRAM instead of RAM. |
| `--disable-model-loading-ram-optimization` | None | False | Disable an optimization that reduces RAM use when loading a model. |
**FEATURES**

| Argument Command | Value | Default | Description |
| --- | --- | --- | --- |
| `--autolaunch` | None | False | Open the web UI URL in the system's default browser upon launch. |
| `--theme` | None | Unset | Open the web UI with the specified theme (light or dark). If not specified, uses the default browser theme. |
| `--use-textbox-seed` | None | False | Use textbox for seeds in UI (no up/down, but possible to input long seeds). |
| `--disable-safe-unpickle` | None | False | Disable checking PyTorch models for malicious code. |
| `--ngrok` | NGROK | None | ngrok authtoken; alternative to gradio --share. |
| `--ngrok-region` | NGROK_REGION | us | The region in which ngrok should start. |
| `--ngrok-options` | NGROK_OPTIONS | None | The options to pass to ngrok in JSON format, e.g.: {"authtoken_from_env":true, "basic_auth":"user:password", "oauth_provider":"google", "oauth_allow_emails":"user@asdf.com"} |
| `--update-check` | None | None | On startup, notifies whether or not your web UI version (commit) is up-to-date with the current master branch. |
| `--update-all-extensions` | None | None | On startup, pulls the latest updates for all extensions you have installed. |
| `--reinstall-xformers` | None | False | Force-reinstall xformers. Useful for upgrading, but remove it after upgrading or you'll reinstall xformers perpetually. |
| `--reinstall-torch` | None | False | Force-reinstall torch. Useful for upgrading, but remove it after upgrading or you'll reinstall torch perpetually. |
| `--tests` | TESTS | False | Run tests to validate web UI functionality; see wiki topic for more details. |
| `--no-tests` | None | False | Do not run tests even if --tests option is specified. |
| `--dump-sysinfo` | None | False | launch.py argument: dump limited sysinfo file (without information about extensions, options) to disk and quit. |
| `--disable-all-extensions` | None | False | Disable all extensions from running. |
| `--disable-extra-extensions` | None | False | Disable all non-built-in extensions from running. |
| `--skip-load-model-at-start` | None | False | Skip loading the model at startup; only takes effect with --nowebui. |
| `--unix-filenames-sanitization` | None | False | Allow any symbols except '/' in filenames. May conflict with your browser and file system. |
| `--filenames-max-length` | int | 128 | Maximal length of filenames of saved images; longer filenames will be truncated. If overridden, it can potentially cause issues with the file system. |
| `--no-prompt-history` | None | False | Disable the "read prompt from last generation" feature; disables --data-path/params.txt. |
**DEFUNCT OPTIONS**

| Argument Command | Value | Default | Description |
| --- | --- | --- | --- |
| `--show-negative-prompt` | None | False | No longer has an effect. |
| `--deepdanbooru` | None | False | No longer has an effect. |
| `--unload-gfpgan` | None | False | No longer has an effect. |
| `--gradio-img2img-tool` | GRADIO_IMG2IMG_TOOL | None | No longer has an effect. |
| `--gradio-inpaint-tool` | GRADIO_INPAINT_TOOL | None | No longer has an effect. |
| `--gradio-queue` | None | False | No longer has an effect. |
| `--add-stop-route` | None | False | No longer has an effect. |
| `--always-batch-cond-uncond` | None | False | No longer has an effect; moved into UI under Settings > Optimizations. |
| `--max-batch-count` | MAX_BATCH_COUNT | 16 | No longer has an effect; moved to ui-config.json (txt2img/Batch count/maximum, img2img/Batch count/maximum). See User-Interface-Customizations. |