Comments


Viewing most recent comments 21 to 60 of 180 · Next page · Previous page · First page · Last page
(+1)

Is there any chance you can make the 2.3.5 version available again?

(-1)

Always loading...

3900X

2070 Super

64 GB RAM

4 TB SSD

Windows 11


(+1)

Is it possible to allow downloads of older versions? I would like to use 2.3.5, as I keep running into issues with merging my diffuser models (exception in ASGI application) until 3.0 stabilises.

I sadly don't have it uploaded anymore, because the space on my server is limited. :(

If you send me some google drive or dropbox link (via DM), then I can upload it there. :) Should have around 10 GB of free space.

Deleted 222 days ago

When we install from itch, is the installer file kept after installation, in case an update breaks something and you need to reinstall?

I think I'm kinda blind, or does the new version no longer have a high-res fix?

(+2)

Yes, no high-res fix, and no face restore either. And the SDXL models need 3x the time to render. I'll stay with 2.3.5 till 3.0.X is in good condition.

update.bat (yes, the fixed version) ruined my install. I'll just wait for 3.1 instead and download the whole package again then...

(+1)

While running the update.bat file, an error occurs:

Loading...

Traceback (most recent call last):

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\runpy.py", line 196, in _run_module_as_main

    return _run_code(code, main_globals, None,

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\runpy.py", line 86, in _run_code

    exec(code, run_globals)

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\Scripts\invokeai-web.exe\__main__.py", line 4, in <module>

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\api_app.py", line 22, in <module>

    from ..backend.util.logging import InvokeAILogger

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\__init__.py", line 4, in <module>

    from .generator import InvokeAIGeneratorBasicParams, InvokeAIGenerator, InvokeAIGeneratorOutput, Img2Img, Inpaint

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\generator\__init__.py", line 4, in <module>

    from .base import (

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\generator\base.py", line 9, in <module>

    import diffusers

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\__init__.py", line 3, in <module>

    from .configuration_utils import ConfigMixin

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\configuration_utils.py", line 34, in <module>

    from .utils import (

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\utils\__init__.py", line 21, in <module>

    from .accelerate_utils import apply_forward_hook

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\utils\accelerate_utils.py", line 24, in <module>

    import accelerate

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\__init__.py", line 3, in <module>

    from .accelerator import Accelerator

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\accelerator.py", line 35, in <module>

    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\checkpointing.py", line 24, in <module>

    from .utils import (

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\utils\__init__.py", line 133, in <module>

    from .launch import (

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\utils\launch.py", line 23, in <module>

    from ..commands.config.config_args import SageMakerConfig

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\commands\config\__init__.py", line 19, in <module>

    from .config import config_command_parser

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\commands\config\config.py", line 25, in <module>

    from .sagemaker import get_sagemaker_input

  File "D:\Deep Fusion\invokeai\invokeai3_standalone\env\lib\site-packages\accelerate\commands\config\sagemaker.py", line 35, in <module>

    import boto3  # noqa: F401

  File "C:\Users\jaysh\AppData\Roaming\Python\Python310\site-packages\boto3\__init__.py", line 17, in <module>

    from boto3.session import Session

  File "C:\Users\jaysh\AppData\Roaming\Python\Python310\site-packages\boto3\session.py", line 17, in <module>

    import botocore.session

  File "C:\Users\jaysh\AppData\Roaming\Python\Python310\site-packages\botocore\session.py", line 26, in <module>

    import botocore.client

  File "C:\Users\jaysh\AppData\Roaming\Python\Python310\site-packages\botocore\client.py", line 15, in <module>

    from botocore import waiter, xform_name

  File "C:\Users\jaysh\AppData\Roaming\Python\Python310\site-packages\botocore\waiter.py", line 16, in <module>

    import jmespath

ModuleNotFoundError: No module named 'jmespath'

I have no idea what this is. Please help.
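A note on what the traceback shows: partway through, the imports switch from the standalone's env folder to the per-user directory (AppData\Roaming\Python), where jmespath isn't installed. A minimal diagnostic sketch — the module names come from the traceback above; running it with the standalone's python.exe is an assumption:

```python
import importlib.util

def module_location(name: str) -> str:
    """Return the file a module would be imported from, or a note if missing."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        return f"{name} not importable from this interpreter"
    return spec.origin

# On the failing setup this would likely show boto3 resolving from the
# user site-packages (AppData\Roaming\Python) while jmespath is absent,
# matching the ModuleNotFoundError above.
print(module_location("boto3"))
print(module_location("jmespath"))
```

If boto3 resolves from the roaming directory, the user-site install is shadowing the standalone's bundled packages, which matches the "rename the Python folder" workaround suggested elsewhere in this thread.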

Encountering this error when using a custom model from civitai.com:


[2023-08-11 03:51:29,258]::[InvokeAI]::ERROR --> Traceback (most recent call last):

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\services\processor.py", line 86, in __process

    outputs = invocation.invoke(

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context

    return func(*args, **kwargs)

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\compel.py", line 81, in invoke

    tokenizer_info = context.services.model_manager.get_model(

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\services\model_manager_service.py", line 364, in get_model

    model_info = self.mgr.get_model(

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\model_management\model_manager.py", line 484, in get_model

    model_path = model_class.convert_if_required(

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 123, in convert_if_required

    return _convert_ckpt_and_cache(

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 283, in _convert_ckpt_and_cache

    convert_ckpt_to_diffusers(

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1740, in convert_ckpt_to_diffusers

    pipe = download_from_original_stable_diffusion_ckpt(checkpoint_path, **kwargs)

  File "C:\Users\crdbrdmsk\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1257, in download_from_original_stable_diffusion_ckpt

    logger.debug(f"original checkpoint precision == {checkpoint[precision_probing_key].dtype}")

KeyError: 'model.diffusion_model.input_blocks.0.0.bias'

[2023-08-11 03:51:29,263]::[InvokeAI]::ERROR --> Error while invoking:

'model.diffusion_model.input_blocks.0.0.bias'


Any fix for this? First time using this AI.

I can't find the "force image" slider option in "image to image" in the NEW SDXL invokeai3_standalone. Is it still there, or am I just too blind to find it? :(

I think it's called "Denoising Strength". And yeah, that's a really confusing name. :/

Confirmed, "Denoising Strength" is now force image. On a happy note, I'm not dumb ;D

(1 edit)

Can't load up InvokeAI at all; tried the patch as well as renaming the Python folder.

edit: this shows when launching commandline.bat and helper.bat;
update.bat just closes instantly

Official standalone.

Loading...

Traceback (most recent call last):

  File "Q:\Invoke\AI\invokeai3_standalone\env\lib\runpy.py", line 196, in _run_module_as_main

    return _run_code(code, main_globals, None,

  File "Q:\Invoke\AI\invokeai3_standalone\env\lib\runpy.py", line 86, in _run_code

    exec(code, run_globals)

  File "Q:\Invoke\AI\invokeai3_standalone\env\Scripts\invokeai-web.exe\__main__.py", line 4, in <module>

ModuleNotFoundError: No module named 'invokeai.app'

Press any key to continue

Thanks for reaching out! :)

Did you try to start the starter (invokeai_starter.exe) before that? The starter sets some required variables first.

Yes, it was the first thing I tried; that's when I first saw the error. Sorry for not clarifying.

Sunija, thanks a lot for the new upload!

Hi. Due to how absolutely garbage my internet connection is, could you (if it's not too much trouble) upload one of the following: a far more compressed version of invokeai3_standalone.7z (13.5 GB, v3.0.0) whose size approaches the previous 8.15 GB zip; a multi-part compressed set of both this 7z file and [NEW SDXL] invokeai3_standalone.7z (26 GB, v3.0.1); torrents for them both; and/or a list of the component files (i.e. github repositories, huggingface/civitai models, etc.) to facilitate assembling a standalone?

There are many YouTube tutorials for downloading and assembling all the required files.

Any good recommendation?

https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/

Thanks for reaching out!

If you use the itch.io app, you might be able to download the file in parts. I think. :X

I *could* maybe make a "minimal version" that does not contain the ControlNet models and the SDXL models. Then we can go down to 8 GB again. InvokeAI contains a downloader (it's in the commandline, but kinda usable), so you could download the models after that.

Oh, no, no, I was talking about which models you built the standalone with (and any other libraries or such requirements). Since, let's just say, anything that requires downloading stuff to "install" itself is a big no-no in my setup, I want to assemble a "ready to build completely offline" standalone.

I'm not sure if I understood you completely yet. ^^'

But on the github page of my standalone, there's an instruction on how to create the standalone yourself. It's a bit more "techy", because it requires you to do stuff in the command line. I can send you the compiled starter and batch files via Discord (Sunija#6598), I think I didn't add those to the repo yet. :X

You can find the github page here. Is that what you need?

Can we please get an update!

(+1)(-1)

It's uploading and will be available in 2.5h here. :)
(If you start downloading before that, you'll get an incomplete file.)

(-1)

Thank you for the response I thought I was getting ignored all this time. 

Jesus, 26 GB? :P Any easy update files? I don't need the SDXL checkpoints, have them :)

I cannot use img2img and unified canvas. 

[2023-07-30 10:33:23,764]::[InvokeAI]::ERROR --> Traceback (most recent call last):

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\services\processor.py", line 70, in __process

    outputs = invocation.invoke(

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context

    return func(*args, **kwargs)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\latent.py", line 762, in invoke

    image_tensor_dist = vae.encode(image_tensor).latent_dist

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper

    return method(self, *args, **kwargs)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\models\autoencoder_kl.py", line 236, in encode

    h = self.encoder(x)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\models\vae.py", line 139, in forward

    sample = down_block(sample)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1150, in forward

    hidden_states = resnet(hidden_states, temb=None)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\models\resnet.py", line 596, in forward

    hidden_states = self.norm1(hidden_states)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward

    return F.group_norm(

  File "C:\Users\Irfan\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm

    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 4.00 GiB total capacity; 2.95 GiB already allocated; 0 bytes free; 3.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

You are running out of memory. Maybe try to make smaller images first, then try to increase the size until it crashes, so you know the limit of your graphics card. :3
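Besides smaller images, the error message itself suggests tuning the allocator. A hedged sketch — the value 128 is an arbitrary example, and this only mitigates fragmentation; it cannot create VRAM that a 4 GB card doesn't have:

```python
import os

# Must be set before torch is imported; smaller split sizes reduce CUDA
# memory fragmentation at some cost in allocation overhead.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ...then import torch / launch InvokeAI as usual.
```

Alternatively, the same setting can be put into the launch batch file as an environment variable before the invokeai-web call.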

A matching Triton is not available, some optimizations will not be enabled.

Error caught was: No module named 'triton'


What is this?

from [NEW] invokeai3_standalone.7z

False error. :)
Triton is only available on Linux, so this error should not even appear on Windows.

I have the same error. Windows 10 System.

Will this be updated to the latest version 3.0.1?

This please. SDXL in the regular UI is a must.

Deleted 33 days ago
Deleted 33 days ago
(1 edit) (+1)

So I wonder how long this would take on a GTX 750, XD

(+2)

I don't know about GTX 750, but I tried GTX 750 TI 4GB (750 TI has 2 GB models I believe). It's slow, but not dumb slow. Desc said "6s per image on a RTX 3060.", my 750 TI took 20s-ish.

(1 edit) (-1)

This one keeps giving me a black screen.


A matching Triton is not available, some optimizations will not be enabled.

Error caught was: No module named 'triton'

[2023-07-24 23:48:45,044]::[InvokeAI]::INFO --> Patchmatch initialized

C:\Users\thanh\Downloads\invokeai3_standalone\invokeai3_standalone\env\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.

  warnings.warn(

INFO:     Started server process [9388]

INFO:     Waiting for application startup.

[2023-07-24 23:48:47,863]::[InvokeAI]::INFO --> InvokeAI version 3.0.0

[2023-07-24 23:48:47,863]::[InvokeAI]::INFO --> Root directory = C:\Users\thanh\Downloads\invokeai3_standalone\invokeai3_standalone\invokeai

[2023-07-24 23:48:47,868]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 3060 Laptop GPU

[2023-07-24 23:48:47,872]::[InvokeAI]::INFO --> Scanning C:\Users\thanh\Downloads\invokeai3_standalone\invokeai3_standalone\invokeai\models for new models

[2023-07-24 23:48:48,403]::[InvokeAI]::INFO --> Scanned 29 files and directories, imported 0 models

[2023-07-24 23:48:48,406]::[InvokeAI]::INFO --> Model manager service initialized

INFO:     Application startup complete.

INFO:     Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)

INFO:     127.0.0.1:56147 - "GET /api/v1/app/version HTTP/1.1" 200 OK

INFO:     127.0.0.1:56154 - "GET /socket.io/?EIO=4&transport=polling&t=Oc8pRcY HTTP/1.1" 200 OK

INFO:     127.0.0.1:56154 - "GET /api/v1/app/version HTTP/1.1" 200 OK

INFO:     127.0.0.1:56154 - "GET /api/v1/models/?model_type=embedding HTTP/1.1" 200 OK

INFO:     127.0.0.1:56154 - "GET /api/v1/boards/?all=true HTTP/1.1" 200 OK

INFO:     127.0.0.1:56154 - "GET /api/v1/models/?model_type=main HTTP/1.1" 200 OK

INFO:     127.0.0.1:56156 - "GET /api/v1/models/?model_type=vae HTTP/1.1" 200 OK

INFO:     127.0.0.1:56157 - "GET /api/v1/models/?model_type=controlnet HTTP/1.1" 200 OK

INFO:     127.0.0.1:56158 - "GET /api/v1/models/?model_type=lora HTTP/1.1" 200 OK

INFO:     127.0.0.1:56154 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=100&offset=0 HTTP/1.1" 200 OK

INFO:     127.0.0.1:56159 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200 OK

INFO:     127.0.0.1:56160 - "GET /api/v1/images/?board_id=none&categories=control&categories=mask&categories=user&categories=other&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200 OK

INFO:     127.0.0.1:56156 - "POST /socket.io/?EIO=4&transport=polling&t=Oc8pRhR&sid=k1xjfLLwYfmEC1VQAAAA HTTP/1.1" 200 OK

INFO:     127.0.0.1:56157 - "GET /socket.io/?EIO=4&transport=polling&t=Oc8pRhS&sid=k1xjfLLwYfmEC1VQAAAA HTTP/1.1" 200 OK

INFO:     127.0.0.1:56157 - "GET /openapi.json HTTP/1.1" 200 OK

INFO:     ('127.0.0.1', 56161) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=k1xjfLLwYfmEC1VQAAAA" [accepted]

INFO:     127.0.0.1:56160 - "GET /socket.io/?EIO=4&transport=polling&t=Oc8pRsp HTTP/1.1" 200 OK

INFO:     connection open

INFO:     127.0.0.1:56156 - "GET /socket.io/?EIO=4&transport=polling&t=Oc8pRi5&sid=k1xjfLLwYfmEC1VQAAAA HTTP/1.1" 200 OK

INFO:     127.0.0.1:56160 - "POST /socket.io/?EIO=4&transport=polling&t=Oc8pRw1&sid=46lPMrt7ODDLvp1qAAAC HTTP/1.1" 200 OK

INFO:     127.0.0.1:56159 - "GET /socket.io/?EIO=4&transport=polling&t=Oc8pRw2&sid=46lPMrt7ODDLvp1qAAAC HTTP/1.1" 200 OK

INFO:     ('127.0.0.1', 56163) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=46lPMrt7ODDLvp1qAAAC" [accepted]

INFO:     connection open

INFO:     127.0.0.1:56160 - "GET /socket.io/?EIO=4&transport=polling&t=Oc8pRwN&sid=46lPMrt7ODDLvp1qAAAC HTTP/1.1" 200 OK

(1 edit) (+1)

Thanks for reaching out!

You have to open the page, and then delete the local storage (explained here).

Deleted 298 days ago

Thanks for your response, but I think you sent the wrong link.

(+1)

Yes, that was definitely the wrong link. >.< Thanks for the correction! I switched out the link.

How do I update from 2.3.5 to 3.0.0?

Or is it impossible?

Sadly not (yet), there were too many changes. :(
You'll have to download the new version.

Hi, I have version 2.3.2 installed. Should I install the new version over the old one?

Don't get what to do...


A matching Triton is not available, some optimizations will not be enabled.

Error caught was: No module named 'triton'

Traceback (most recent call last):

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\runpy.py", line 196, in _run_module_as_main

    return _run_code(code, main_globals, None,

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\runpy.py", line 86, in _run_code

    exec(code, run_globals)

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\Scripts\invokeai-web.exe\__main__.py", line 4, in <module>

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\api_app.py", line 36, in <module>

    from .api.dependencies import ApiDependencies

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\api\dependencies.py", line 16, in <module>

    from invokeai.app.services.images import ImageService, ImageServiceDependencies

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\services\images.py", line 8, in <module>

    from invokeai.app.invocations.metadata import ImageMetadata

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\metadata.py", line 8, in <module>

    from invokeai.app.invocations.controlnet_image_processors import ControlField

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\controlnet_image_processors.py", line 23, in <module>

    from .image import ImageOutput, PILInvocationConfig

  File "C:\Users\Ape\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\image.py", line 458, in <module>

    "nearest": Image.Resampling.NEAREST,

  File "C:\Users\Ape\AppData\Roaming\Python\Python310\site-packages\PIL\Image.py", line 65, in __getattr__

    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")

AttributeError: module 'PIL.Image' has no attribute 'Resampling'

Press any key to continue . . .

Can you go to "C:\Users\Ape\AppData\Roaming\" and rename the "Python" folder to "_Python", and then see if it works?

As background info:
For some reason (that I haven't figured out yet), parts of the standalone will not use the standalone's own code but Python code that is already installed on your PC by other Python programs. Renaming that folder means the wrong code can no longer be found, so the right one gets used.
That's not a great long-term solution though, because it might mess up other Python programs that you installed previously. :/
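A small sketch of why the rename helps: Python puts the per-user site-packages directory on sys.path, so packages under AppData\Roaming\Python can shadow the standalone's bundled ones. The documented, less invasive switches are launching with `python -s` or setting the PYTHONNOUSERSITE environment variable; you can inspect the situation like this:

```python
import site

# The directory other Python programs install into; anything here can
# shadow the standalone's env\lib\site-packages.
print(site.getusersitepackages())

# False when the interpreter was started with `python -s` or with
# PYTHONNOUSERSITE set, i.e. user site-packages are being ignored.
print(site.ENABLE_USER_SITE)
```

Whether the standalone's batch files can be switched to `-s` without side effects is an open question; the rename trick works because it removes the shadowing directory entirely.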

(+1)

That worked! It still complains about Triton, but from what I've read it doesn't really matter, and now it proceeds to the web UI and works fine :)

(+1)

LoRA models load very slowly before each invoke; the more you have, the longer it takes. That's probably not the fault of this standalone, just a warning for people using them. Maybe a bug, because A1111 and ComfyUI load them quickly.

Thanks for the report! :)
Do you maybe have comparison numbers (Invoke with x LoRAs vs. ComfyUI with x LoRAs), so I can send that to the team?

I don't have exact numbers to give, but for me InvokeAI 3 takes more than ten seconds before each invoke starts processing, even with just one LoRA. In ComfyUI it seems to be instant with 10 LoRAs. My CPU is weak, so that might be part of the reason, but there might be some optimisations to be done, because I don't have any problems with ComfyUI. Maybe the problem is that InvokeAI doesn't keep LoRAs in memory and therefore loads them every time, whereas ComfyUI does. I have no idea and I'm not an expert; just a guess.

(+1)

ControlNet models are missing from the InvokeAI 3 standalone. This is unfortunate, as they are one of the main features. A portable standalone should not need internet for extra downloads. Please include them so that it can be used in an offline environment. Thanks for your work.

It contains the 4 most used ControlNet models (pose, canny, etc.). You can easily download more by clicking "Start model installer" in the starter. I left the others out because they would increase the download by 10 GB.

But I can also make an optional "all included" version. :)

Yes, it would be great to have a build that has everything; that would stand out from the alternatives. I'm looking forward to it.

Canny is the only ControlNet model that sort of works (except with inpainting). The other ControlNet models give errors in both the UI and the console; something about ControlNet processors is missing. Here is an example of an attempt to use OpenPose:

[InvokeAI]::ERROR --> Traceback (most recent call last):

  File "C:\Users\user\invokeai3_standalone\env\lib\site-packages\invokeai\app\services\processor.py", line 70, in __process

    outputs = invocation.invoke(

  File "C:\Users\user\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\controlnet_image_processors.py", line 221, in invoke

    processed_image = self.run_processor(raw_image)

  File "C:\Users\user\invokeai3_standalone\env\lib\site-packages\invokeai\app\invocations\controlnet_image_processors.py", line 385, in run_processor

    openpose_processor = OpenposeDetector.from_pretrained(

  File "C:\Users\user\invokeai3_standalone\env\lib\site-packages\controlnet_aux\open_pose\__init__.py", line 103, in from_pretrained

    body_model_path = hf_hub_download(pretrained_model_or_path, filename, cache_dir=cache_dir)

  File "C:\Users\user\invokeai3_standalone\env\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn

    return fn(*args, **kwargs)

  File "C:\Users\user\invokeai3_standalone\env\lib\site-packages\huggingface_hub\file_download.py", line 1291, in hf_hub_download

    raise LocalEntryNotFoundError(

huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.
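The traceback shows that the OpenPose preprocessor fetches its annotator weights via hf_hub_download on first use, which fails offline. One hedged workaround (untested here): populate the Hugging Face cache once on a machine with internet, copy it over, and point the standalone at it before launch. The path below is hypothetical:

```python
import os

# Hypothetical pre-populated cache directory copied from an online machine.
# HF_HOME must be set before any huggingface_hub import, so that
# hf_hub_download finds the annotator files in the disk cache instead of
# attempting a network request.
os.environ["HF_HOME"] = r"D:\hf_cache"
```

The same variable could also be set in the launch batch file; either way it only helps if the cache actually contains the files the processor asks for.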

I tried downloading the new update you posted. It doesn't seem to work. I get this error *ModuleNotFoundError: No module named 'filelock'* when launching InvokeAI. I have tried restarting my PC and also completely deleted InvokeAI and did a fresh install, but the error persists.

Here is the full transcript (I edited the username to say PC instead, for obvious reasons).

Official standalone.

Loading...

Traceback (most recent call last):

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\runpy.py", line 196, in _run_module_as_main

    return _run_code(code, main_globals, None,

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\runpy.py", line 86, in _run_code

    exec(code, run_globals)

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\Scripts\invokeai-web.exe\__main__.py", line 4, in <module>

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\app\api_app.py", line 21, in <module>

    from ..backend.util.logging import InvokeAILogger

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\__init__.py", line 4, in <module>

    from .generator import (

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\generator\__init__.py", line 4, in <module>

    from .base import (

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\invokeai\backend\generator\base.py", line 9, in <module>

    import diffusers

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\__init__.py", line 3, in <module>

    from .configuration_utils import ConfigMixin

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\diffusers\configuration_utils.py", line 29, in <module>

    from huggingface_hub import hf_hub_download

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\huggingface_hub\__init__.py", line 322, in __getattr__

    submod = importlib.import_module(submod_path)

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\importlib\__init__.py", line 126, in import_module

    return _bootstrap._gcd_import(name[level:], package, level)

  File "C:\Users\PC\AppData\Roaming\itch\apps\invokeai\invokeai3_standalone\env\lib\site-packages\huggingface_hub\file_download.py", line 21, in <module>

    from filelock import FileLock

ModuleNotFoundError: No module named 'filelock'

I'm getting the same result after trying to upgrade. :(

(1 edit)

Sorry for the issues!

You can fix your installation by:
1) Download this patch (1 MB)
2) Put it in your invokeai3_standalone folder
3) Unzip it

All new downloads of the standalone already have this patch.

I'm so excited about 3.0, finally having image organization features! Any ETA for the update on the standalone?

(+1)

I'm compressing the new standalone atm. :) So tomorrow it should be ready.

(+1)

Advice:
Can you show, in the download name or above it, which version this currently is, every time you update it?
So that everyone who comes here always knows which version they are downloading?

(+1)

Good idea! :)
Btw, the first version is up. It would be cool if you could download and try it, so we can see if there are any issues.

Deleted 301 days ago

Official standalone.

Loading...

E:\AI\invokeai_3_0_0_standalone/env/python.exe: can't open file 'E:\\AI\\invokeai_3_0_0_standalone\\env\\Scripts\\invoke.exe': [Errno 2] No such file or directory


The new standalone version doesn't work.

Did you use the update script or download the new version directly? :3
The update script sadly won't work. :( There were too many changes.

How do I add a sampler?

I ran the update script and it auto-updated to v3.0.0b5, and now the launcher no longer works, displaying the following error: ModuleNotFoundError: No module named 'invokeai.app'. Would it be possible to downgrade using the updater? (I don't remember the version that comes with this standalone, though.)

Downgrading sadly doesn't work. :(
But you can download it again and copy the old output folder to your new downloaded version. Then you should be able to use all your images.

Sorry for the inconvenience. :(

Using an NVIDIA 1660 Super, it works quite well; the results are quite satisfactory. Are there plans to incorporate SDXL in the future?

(+1)

As soon as it's officially released, it will be implemented by the InvokeAI team. :)

(+1)

Could you please update to version v2.3.5.post2? The updater that is available in the program does not seem to update anything after running. Thank you very much, I love the program; it is one of the best there is at the moment.

Hello, I have put the LoRA into the folder your launcher points to, but after opening it, it still says there is no LoRA, and restarting doesn't help either. Also, I'd like to ask: could you add multi-language support, perhaps a Chinese version?

I got this error trying to update InvokeAI:

An error occurred during installation htfs.Open (initial request): in conn.Connect, non-retriable error: in conn.tryConnect, got HTTP non- 2XX: api.itch.io: HTTP 400: {"errors":["invalid upload"]}

(2 edits)

Open itch.io as admin.

Worked for me, I guess.

Edit:

In the files there's an update script.

(+1)

Hey, do you know why? Apparently I also got this issue.

(+2)

On an old pc, GTX 1070. Up and running with one click install from Itch app. Good images, comparable generation time to the previous aiimages program. So many new features to learn! Thanks much!

(+1)

After clicking the update file, it did an update, and now it's super slow.

Hi, thanks for reaching out!
There was a broken update. To fix that, download InvokeAI again and move your output folder to the new installation.

I'm currently checking if the update script is okay to use again. :) Sorry for the trouble!

I'm not sure at this point, as it has been a while. It was definitely after updating, but I can't say if it was like that from the first attempts. It's still happening, and I have to launch from the .exe file to get it to start. Trying to launch it from the itch app doesn't work 9/10 times for me now.

(+1)

Lately, it takes forever (and sometimes stalls) to run Invoke AI. Not sure where to ask about it. I've reinstalled it and saw no difference. Is anyone else experiencing the same?

Hi, thanks for reaching out!

Did the problem occur after trying to use update.bat? That one introduced a problem a while back (I'll have to check if it's fixed by now).

Hey, when in Canvas mode everything generated looks very bad, even after tweaking the settings countless times, although it looks good in text-to-image mode… what can I do?

(1 edit)

Download a model from civitai

Hi, thanks for reaching out! :)

If this is still a problem, can you send me a screenshot of your canvas (and the settings) on Discord? (username: Sunija or Sunija#6598) Then I can check if I see the issue. :)

(1 edit)

I just get a black screen when trying to create a picture. I've heard that the GTX 1660 Ti has some errors. I already tried putting it on float16; before doing that, it wouldn't even begin to load a picture.

EDIT: I now got it to work with float32, but my LoRAs can't be loaded now. It says "No LoRAs found"...

Any fix?

I get this error whilst trying to render an image. Any tips?

(-1)

Hi, by any chance does this work on an Intel Arc GPU?

Is it possible with a CPU? (Intel Core i3-6100, 3.70 GHz)

Yes, but CPU is really slow (1 image every 10 min).
And in the current version you might have to add "--no-xformers" in the third-to-last line of helper.bat to run it on the CPU. :/
