ComfyUI workflow examples (Reddit)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created, so belittling their efforts will get you banned. And above all, BE NICE.

New Tutorial: How to rent 1-8x GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.).
I know how to combine the two workflows so turbo feeds SVD.
Thanks for the video. Here is a tip: at the start of the video, show an example of why we should watch it; in this case, show us 1-pass vs. 3-pass.
After each step the first latent is downscaled and composited.
Hi, I am fairly new to ComfyUI and Stable Diffusion, and I must say that the whole AI image generation field has really captivated me.
I don't know why these example workflows are laid out so compressed together.
I'm new to ComfyUI and trying to understand how I can control it.
It's not that big workflows are better.
The WAS suite has some workflow stuff in its GitHub links somewhere as well.
Applying "denoise:0.5" to reduce noise in the resulting image.
Reference image analysis for extracting images/maps for use with ControlNet.
I feel like this is possible; I am still semi-new to Comfy.
I haven't really shared much and want to use others' ideas.
Eh, if you build the right workflow, it will pop out 2K and 8K images without the need for a lot of RAM. That's how you use an existing image essentially as the base of what you're generating.
Looking forward to seeing your workflow.
Let's break down the main parts of this workflow so that you can understand it better.
Nothing special but easy to build off of.
This is the image in the file, converted to a JPG.
Fetch Updates in the ComfyUI Manager to be sure that you have the latest version.
Explore thousands of workflows created by the community.
rgthree context node workflow help.
Going to python_embedded and using python -m pip install compel got the nodes working.
Don't forget to prompt the light type in the text field under the 'IC-Light Diffusers Sample' node.
My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention. AP Workflow 5.0 is the first step in that direction.
The documentation is remarkably sparse and offers very little in the way of explaining how to implement it effectively.
Thanks tons! That's the one I'm referring to.
4 - The best workflow examples are through the GitHub examples pages.
Allows you to choose the resolution of all outputs in the starter groups.
It'll be perfect if it includes upscale too (though I can upscale it in an extra step in the extras tab of Automatic1111).
A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.
It is about multi-prompting, multi-pass workflows, and basically how to set up a really good workflow for pushing your own projects to the next level.
We have four main sections: Masks, IPAdapters, Prompts, and Outputs.
I find node workflows very powerful, but very hard to navigate inside.
That's a bit presumptuous considering you don't know my requirements.
If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well.
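Since the comments above mention that ComfyUI embeds the workflow in the PNGs it saves, here is a minimal sketch of reading it back out with Pillow. It assumes the image came from ComfyUI's stock SaveImage node, which in current builds writes "prompt" and "workflow" PNG text chunks; the filename below is hypothetical.

```python
import json
from PIL import Image

def read_embedded_workflow(path):
    info = Image.open(path).info  # PNG text chunks (tEXt/iTXt) end up in .info
    found = {}
    for key in ("workflow", "prompt"):
        if key in info:
            found[key] = json.loads(info[key])
    return found

meta = read_embedded_workflow("ComfyUI_00001_.png")  # hypothetical filename
print(list(meta.keys()))  # which chunks were present
```

Note that converting the image to JPG (as one commenter did) discards these PNG chunks, which is why such files no longer load a workflow when dropped into ComfyUI.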
Usually the smaller workflows are more efficient or make use of specialized nodes.
From only 3 frames, and it followed the prompt exactly and imagined all the weight of the motion and timing! The SparseCtrl RGB is likely helping as a cleanup tool, blending different batches together to achieve something flicker-free.
So when I saw the recent Generative Powers of Ten video on r/StableDiffusion (reddit.com), I was pretty sure the nodes to do it already exist in ComfyUI.
Saw lots of folks struggling with workflow setups and manual tasks.
IPAdapter for all.
#2 is especially common: when these third-party node suites change and you update them, the existing nodes stop working because they don't preserve backward compatibility.
Even with 4 regions and a global condition, they just combine them all two at a time until it becomes a single positive condition to plug into the KSampler.
Now it can also save the animations in formats other than GIF.
It is still in the beta phase, but there are some ready-to-use workflows you can try.
Will output this resolution to the bus.
🖼️ Gallery and cover images: Every image you generate will be saved in the gallery corresponding to the current workflow.
Run any ComfyUI workflow with zero setup (free & open source).
On my system with a 2070S (8 GB VRAM), a Ryzen 3600, and 32 GB of 3200 MHz RAM, the base generation for a single image took 28 seconds, and it then took an additional 2 minutes and 32 seconds to refine.
THE LAB EVOLVED is an intuitive, all-in-one workflow.
I see examples with 200+ nodes on that site.
Any ideas on this? The image likely does not have the workflow embedded: either it was not made by ComfyUI, or it was converted to a format (like JPG) that drops the embedded metadata.
Using "ImageCompositeMasked" to remove the background from the character image and align it with the background image.
About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it puts in the wrong positive prompt. Anyone else going through that?
This is what the workflow looks like in ComfyUI. This image contains the same areas as the previous one, but in reverse order.
Latent Upscale is a glorified img2img and will result in subtle changes; an Upscale Model (like ESRGAN, Swin, etc.) works on the decoded image in pixel space instead.
Follow Scott Detweiler on YouTube.
I will explore this stuff now and share my workflows along the way.
However, this is the first and only feature for this extension for now, and selling progress is a complex topic, so I might not use it as an example.
'FreeU_V2' for better contrast and detail, and 'PatchModelAddDownscale' so you can generate at a higher resolution.
Here is my current 1.5 txt2img workflow if anyone would like to criticize or use it. Combined Searge and some of the other custom nodes.
Here's a quick example where the lines from the scribble actually overlap with the pose.
Same workflow as the image I posted but with the first image being different.
Example of what I mean here: area composition with Anything-V3 + second pass with AbyssOrangeMix2_hard.
Simple ComfyUI Img2Img Upscale Workflow.
The closer your starting steps are to the total steps, the lower the denoise: 20 steps with 15 starting steps is 0.25 denoise, and 20 steps with 10 starting steps is 0.5 denoise.
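The starting-steps examples above follow a simple rule of thumb: skipping the first start_at of total steps behaves roughly like a plain KSampler run with denoise = 1 - start_at / total. A quick check of that arithmetic:

```python
# Rule of thumb from the comment above: skipping the first `start_at` of
# `total` steps is roughly equivalent to denoise = 1 - start_at / total.
def equivalent_denoise(start_at: int, total: int) -> float:
    return round(1 - start_at / total, 3)

print(equivalent_denoise(15, 20))  # 0.25 -- matches the 20/15 example
print(equivalent_denoise(10, 20))  # 0.5  -- matches the 20/10 example
```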
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).
If GLIGEN bounding boxes touch each other, they will blend at the edges.
The whole point is to allow the user to set up an interface with only the inputs and outputs they want to see, and to customize and share it easily.
Basically, two nodes are doing the heavy lifting. As always, I'd like to remind you that this is a workflow designed to teach how to build a pipeline and how SDXL works. Try bypassing both nodes and see how bad the image is by comparison.
Remove the node from the workflow and re-add it.
🔊 More audio reactivity exploration.
Generating separate background and character images.
Select multiple nodes, right-click the canvas (background), click "Save selected as template", and pick a name. Now you can right-click the canvas in any workflow and insert that template.
Hi, I'm looking for input and suggestions on how I can improve my output image results using tips and tricks as well as various workflow setups. I know there is the ComfyAnonymous workflow, but it's lacking.
Maybe ComfyUI just needs quick settings, or previous settings saved the way the all-in-one prompt extension does them, so people don't have to type it all again.
Can someone guide me to the best all-in-one workflow that includes a base model, refiner model, hi-res fix, and one LoRA?
I cannot find any decent examples or explanations of how this works or the best ways to implement it.
Below is the simplest way you can use ComfyUI.
Please feel free to criticize and tell me where I may be doing something silly.
In that case, you want to use a detailer instead of inpainting.
Got sick of all the crazy workflows.
Google Powers of Ten (nested images) example workflow.
However, you can also run any workflow online: the GPUs are abstracted so you don't have to rent any GPU manually, and since the site is in beta right now, running workflows online is free. Unlike simply running ComfyUI on some arbitrary cloud GPU, our cloud sets everything up automatically so that there are no missing files or custom nodes.
Selecting a model.
The base generation is quite a bit faster than the refining.
Plus a quick run-through of an example ControlNet workflow.
It is divided into distinct blocks, which can be activated with switches: a background remover, to facilitate the generation of the images/maps referred to in point 2.
Not really the base workflow, but just some ways to expand on it.
Looping through and changing values, I suspect, becomes an issue once you go beyond a simple workflow or use custom nodes.
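If you do want to drive ComfyUI from a script rather than by hand, here is a minimal sketch of queueing a graph through the HTTP API. It assumes a local server on the default 127.0.0.1:8188 and a workflow exported via "Save (API Format)" in the editor; the file name is hypothetical, and the endpoint behaviour may change between versions.

```python
import json
import urllib.request

def queue_prompt(api_workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    # The server accepts a JSON body of the form {"prompt": <api-format graph>}
    data = json.dumps({"prompt": api_workflow}).encode("utf-8")
    req = urllib.request.Request(f"http://{host}/prompt", data=data)
    return json.loads(urllib.request.urlopen(req).read())

with open("workflow_api.json") as f:   # hypothetical "Save (API Format)" export
    graph = json.load(f)

print(queue_prompt(graph))             # server replies with a prompt id on success
```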
I've added a bunch of examples to the workflow post with comparison images, but I haven't put it through its paces with old photographs.
In researching inpainting using SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: the base model with Latent Noise Mask, the base model using InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face.
I also had issues with this workflow with unusually-sized images.
This one uses a single IPAdapter image. Animation flow is controlled by the same looping-circle QRCode ControlNet, but I layered it with a simple audio-reactive mask that I put together in After Effects by ear.
So it does reimagine some things depending on your prompt, model, and CFG, and for this workflow in particular also based on how much freedom you give it.
We will walk through a simple example of using ComfyUI, introduce some concepts, and gradually move on to more complicated workflows.
I can load workflows from the example images through localhost:8188; this seems to work fine.
I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s.
Plug that into the latent input in your KSampler (where you usually plug in your empty latent image that lets you choose your image size).
I've built a cloud environment with everything ready: preloaded nodes, models, and it runs smoothly.
For example, a faceswap with a decent detailer and upscaler should contain no more than 20 nodes.
It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something.
The latest ComfyUI update introduces the "Align Your Steps" feature, based on a groundbreaking NVIDIA paper that takes Stable Diffusion generations to the next level.
I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, and a refiner?
This uses more steps, has less coherence, and also skips several important factors in between.
They do overlap. If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler.
Is there any real breakdown of how to use the rgthree context and switching nodes?
I'm perfecting the workflow I've named Pose Replicator.
But I must point out that if you look back at early software history, software sold with a simple key was a common pattern, and the buyer could definitely redistribute the software with its key.
Or through searching Reddit; the ComfyUI manual needs updating, IMO.
Input sources - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated). Prediffusion - creates a very basic image from a simple prompt and sends it as a source.
I am trying to understand how it works and created an animation morphing between 2 image inputs.
I can load ComfyUI through 192.168.1.1:8188, but when I try to load a flow through one of the example images it just does nothing.
With Python, the easiest way I found was to grab a workflow JSON, manually change the values you want into a unique keyword, then use Python to replace that keyword with the new value.
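A sketch of that placeholder trick: export the workflow (API format), swap the values you want to vary for unique markers, then substitute them per run. The %%PROMPT%%/%%SEED%% marker names and the template file name are made up for illustration.

```python
import json
import random

def render_workflow(template_path: str, **values) -> dict:
    text = open(template_path, encoding="utf-8").read()
    for key, val in values.items():
        # The template keeps every placeholder quoted ("%%SEED%%"); json.dumps
        # re-inserts the value with the correct JSON type and escaping.
        text = text.replace(f'"%%{key}%%"', json.dumps(val))
    return json.loads(text)

graph = render_workflow(
    "workflow_api_template.json",                  # hypothetical template file
    PROMPT="a photo of a lighthouse at dusk",
    SEED=random.randint(0, 2**32 - 1),
)
# queue `graph` the same way as in the earlier /prompt example
```

Because the template is only parsed after substitution, numeric and string values both drop in cleanly, which avoids the brittleness the comment above describes when workflows grow or use custom nodes.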
You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.
I think the idea is not just the output image, but the whole interface.
Here you can download my ComfyUI workflow with 4 inputs.
Txt2img, img2img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting.
Unveiling the game-changing ComfyUI update.
Introducing ComfyUI Launcher!
Thank you for taking the time to help others.
For example, this is what the workflow produces.
Other than that, there were a few mistakes in version 3.1 that are now corrected.
Upscaling is done with iterative latent scaling and a pass with 4x-UltraSharp.
Adding some updates to this since people are still likely coming here from a Google search and a lot has changed over the past several months. So, if you are using that, I recommend you take a look at this new one.
AnimateDiff Evolved in ComfyUI can now break the 16-frame limit. Kosinkadink, the developer of ComfyUI-AnimateDiff-Evolved, has updated the custom node with new functionality in the AnimateDiff Loader Advanced node that can reach a higher number of frames.
For example, I want to combine the dynamic real-time turbo generation with SVD, letting me quickly work towards an image that I can then instantly animate with SVD by clicking a button or toggling a switch.
While the normal text encoders are not "bad", you can get better results using the special encoders.
It's a bit messy, but if you want to use it as a reference, it might help you.
Step one: hook up IPAdapter x2. Step two: set one to compositional weight and one to style weight. Step three: feed your source into the compositional one and your style into the style one.
Sometimes it's easier to load a workflow from 5-10 minutes ago than to spend 15-30 seconds reconnecting and readjusting settings.
Later, in some new tutorials I've been working on, I'm going to cover the creation of various modules.
For example: 1 - Enable Model SDXL BASE -> this would auto-populate my starting positive and negative prompts and the sampler settings that work best with that model.
Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.
Using the settings I got from the thread on the main SD sub.
So in this workflow each of them will run on your input image.
The performance of this model highly depends on the input; for example, the difference between two input frames cannot be overly large (the model was not trained on that setting, so the success rate could be lower), and the fps parameter can be increased (e.g. 24 or 30) to improve the stability.
They depend on complex pipelines and/or Mixture of Experts (MoE) that enrich the prompt in many different ways.
To create this workflow I wrote a Python script to wire up all the nodes.
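One commenter mentions writing a Python script to wire up all the nodes. A minimal sketch of what that can look like using the API-format graph follows; the node class names and input names below mirror the stock default txt2img workflow, and the checkpoint filename is a placeholder, so verify everything against a "Save (API Format)" export from your own install.

```python
import json

def txt2img_graph(prompt, negative, ckpt="sd_v1-5.safetensors", seed=42):
    # Node keys are arbitrary strings; links are [source_node_id, output_index].
    return {
        "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": ckpt}},
        "2": {"class_type": "CLIPTextEncode", "inputs": {"text": prompt, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode", "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler", "inputs": {
            "model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
            "latent_image": ["4", 0], "seed": seed, "steps": 20, "cfg": 7.0,
            "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "scripted"}},
    }

print(json.dumps(txt2img_graph("a cosplay portrait, studio lighting", "blurry, low quality"), indent=2))
```

The resulting dict can be queued through the /prompt endpoint shown earlier, which is one way to generate large batches of workflow variations without touching the canvas.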
Breakdown of workflow content.
Clicking on the gallery button will show you all the images and videos generated by this workflow! You can choose any picture as the cover image for the workflow, and it will be displayed in the file list.
You can see it's a bit chaotic in this case, but it works.
It's closer, but still not as accurate as the sample images during training. I'm matching the sample settings in Kohya as closely as I can and using the same model, steps, CFG, scheduler, and generation seed.
Just wanted to say that there are a few ways you can perform a "hires fix" now with ComfyUI.
Dragging a generated PNG onto the webpage, or loading one, will give you the full workflow, including the seeds that were used to create it. Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though). If you have any of those generated images as the original PNG, you can just drop them into ComfyUI and the workflow will load.
SDXL has two text encoders on its base, and a specialty text encoder on its refiner. I recommend you do not use the same text encoders as 1.5 and 2.x.
Hey all, another tutorial. Hopefully this can help anyone who has trouble dealing with all the noodly goodness of ComfyUI; in it I show some good layout practices for ComfyUI and how modular systems can be built.
If you use the same nodes he did, you'll adjust the starting steps.
Adding a subject to the bottom center of the image by adding another area prompt.
Thanks a lot for sharing the workflow. I had no idea you could do that with multiple nodes selected.
You should be in the default workflow.
Also, I noticed the original repo with Gradio has Bria background removal, adding a new background, and light matching with that background, but the current ComfyUI repo only offers dynamic light placement.
I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace.
Best ComfyUI workflows, ideas, and nodes/settings.
Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want.
I'm using ComfyUI portable and had to install it into the embedded Python install.
This image contains four different areas: night, evening, day, morning.
It includes literally everything possible with AI image generation.
This feature delivers significant quality improvements in half the number of steps, making your image generation process faster.
Press go 😉.
ComfyUI for product images workflow: I would like to use ComfyUI to make marketing images for my product, which is quite high tech, and I have the images from the photo studio.
You can encode, then decode back to a normal KSampler with a 1.5 model using LCM at 4 steps and 0.2 denoise, to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.
If a GLIGEN bounding box touches the edge of the image, the contained concept may not be fully in frame, i.e. resulting in a closeup/medium shot. To avoid this and keep the concept entirely in frame, you need a pixel gap separating the bounding box from the edge of the image.
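To illustrate the pixel-gap advice above, here is a small, purely illustrative helper that nudges a box away from the image border. The (x, y, width, height) convention and the 64 px default gap are assumptions for the example, not anything the GLIGEN nodes themselves require; the adjusted values would still be entered into the node by hand.

```python
def keep_margin(x, y, w, h, img_w, img_h, gap=64):
    """Clamp a box so it stays `gap` pixels away from every image edge.

    Assumes the box plus both gaps fits inside the image.
    """
    x = min(max(x, gap), img_w - gap - w)
    y = min(max(y, gap), img_h - gap - h)
    return x, y, w, h

print(keep_margin(0, 900, 256, 256, 1024, 1024))  # -> (64, 704, 256, 256)
```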
Now the problem I am facing is that it starts out already morphed between the two, I guess because it happens so quickly.
Text to image using a selection from the initial batch.
You can use () to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8).
Image generation (creation of the base image).
Utilizing "KSampler" to re-generate the image, enhancing the integration between the background and the character.
Lowering the "denoise" setting on the KSampler will keep more and more of the original the way it was.
I've tried with A1111, Forge, and now with Comfy with the most basic LoRA workflow I was able to find.
Don't worry if the jargon on the nodes looks daunting.
I'm thinking about a tool that allows users to create, save, and share UIs based on ComfyUI workflows.