

ComfyUI workflow PNG download (Reddit)


Mar 30, 2023 · The complete workflow you used to create an image is also saved in the file's metadata. But of the custom nodes I've come across that save WebP or JPG, none of them seem to be able to embed the full workflow. And Reddit will strip it away regardless.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except the ComfyUI Manager and then test a vanilla default workflow. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully the problem will have resolved itself. If the PNG is the original one from ComfyUI, then it should contain the workflow. You can then load or drag the following image into ComfyUI to get the workflow.

Welcome to the unofficial ComfyUI subreddit.

EDIT: Walking back my claim that I don't need non-latent upscales.

Preparation work (not in ComfyUI):
- Take a clip and remove the background (this can be done with any video editor that has a rotobrush or, as in my case, with RunwayML).
- Extract the frames from the clip (in my case with ffmpeg).
- Copy the frames into the corresponding input folder (important: saved as 000XX.png).

Flux Schnell is a distilled 4-step model.

It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. For your all-in-one workflow, use the Generate tab.

Then I take another picture with a subject (as in your problem), remove its background and make it IPAdapter-compatible (square), then prompt and IPAdapt it into a new image with the background.

It is possible to load a workflow, or drag one into ComfyUI, as a PNG image. Just my two cents.
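As several of the comments above say, the workflow travels inside the PNG's metadata. As a rough illustration of how that embedded JSON can be read back out (this is a sketch, not ComfyUI's own code; it assumes Pillow is installed, and that the text chunks are named "workflow" and "prompt", the keys ComfyUI is generally understood to use):

```python
import json
from PIL import Image

def read_workflow(path):
    """Return the embedded ComfyUI workflow as a dict, or None if absent."""
    img = Image.open(path)
    # PNG text chunks show up in Pillow's .info mapping
    raw = img.info.get("workflow") or img.info.get("prompt")
    return json.loads(raw) if raw else None
```

If this returns None for an image you downloaded, the host most likely re-encoded the file and stripped the chunks, which is exactly the Reddit problem described above.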
ComfyUI's inpainting and masking aren't perfect; a somewhat decent inpainting workflow in ComfyUI can be a pain to put together.

If the term "workflow" has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or simply "nodes".

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed by the number of ways to do upscaling. I can load the default workflow and just render that jar again, but it still saves the wrong workflow.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

From the Windows file manager, simply drag a .png image file onto the ComfyUI workspace.

Apr 22, 2024 · Workflows are JSON files, or PNG images that contain the JSON data, and they can be shared, imported, and exported easily. You can simply open such an image in ComfyUI, or drag and drop it onto your workflow canvas. This works on all images generated by ComfyUI, unless the image was converted to a different format like JPG or WebP.

I had to place the image into a zip, because people have told me that Reddit strips PNGs of metadata.

Pulled latest from GitHub.

OP probably thinks that ComfyUI has the workflow included with the PNG, and it does. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

The default SaveImage node saves generated images as .png files. I feel like if you are reeeeaaaallly serious about AI art then you need to go Comfy for sure!
Also, I'm just transitioning from A1111, hence using a custom CLIP text encode that emulates the A1111 prompt weighting, so I can reuse my A1111 prompts for the time being; for any new stuff I'll try to use native ComfyUI prompt weighting. Anyone ever deal with this?

This missing metadata can include important workflow information, particularly when using Stable Diffusion or ComfyUI.

You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8).

I tried to find either of those two examples, but I have so many damn images I couldn't find them.

SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it.

Then open or drop the PNG in ComfyUI. Hope you like some of them :)

It will load a workflow from JSON via the load menu, but not via drag and drop.

A collection of workflows for the ComfyUI Stable Diffusion AI image generator. Otherwise, please change the flair to "Workflow not included".

My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

After learning Auto1111 for a week, I'm switching to Comfy due to the rudimentary nature of extensions for everything and persistent memory issues with my 6 GB GTX 1660.

How to use the workflows: you will need to launch ComfyUI with this option each time, so modify your bat file or launch script.

This makes it potentially very convenient to share workflows with others. If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well.
Download ComfyUI. I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene, in a way that lets me use single images. You can then load or drag the following image into ComfyUI to get the workflow: Flux Schnell.

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

Welcome to r/aivideo! 🍿🥤 A community focused on the use of full-motion video generative AI. More to come.

Dragging a generated PNG onto the webpage, or loading one, will give you the full workflow, including the seeds that were used to create it.

Method 1: Drag & Drop. You can save the workflow as a JSON file with the queue control panel's "Save" workflow button. If you have any of those generated images in their original PNG form, you can just drop them into ComfyUI and the workflow will load.

Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling makes ComfyUI flexible. This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Tried multiple PNG and JSON files, including multiple known-good ones. I removed all custom nodes.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

Getting an issue where, whatever I generate, a bogus workflow I used a few days ago is saved, and when I try to load the PNG it brings up the wrong workflow and fails to render anything if I hit Queue.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and txt2img.

If you mean workflows: they are embedded in the PNG files you generate; simply drag a PNG from your output folder onto the ComfyUI surface to restore the workflow. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-) My current workflow is kinda weird, lol.
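A recurring confusion in these comments is that a workflow saved as JSON from the queue panel loads fine, while one exported in API format does not. As a rough heuristic sketch (an assumption about the two layouts, not official ComfyUI code: the drag-and-droppable UI export is generally understood to carry top-level "nodes" and "links" keys, while the API format is a flat mapping of node IDs):

```python
import json

def workflow_kind(path):
    """Guess whether a workflow JSON is the UI export or the API format."""
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, dict) and "nodes" in data and "links" in data:
        return "ui"   # full graph export; loadable via the Load menu / drag & drop
    return "api"      # "Save API Format" output, meant for programmatic queueing
```

If a JSON refuses to load in the UI, checking it against this shape is a quick way to tell which button produced it.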
Comfy has clearly taken a smart and logical approach with the workflow GUI, at least from a programmer's point of view. I think it was 3DS Max.

It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, re-rendering with a second model, etc.

But mine do include workflows, for the most part, in the video description.

No errors in the shell on drag and drop; nothing on the page updates at all.

Here I just use: "futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, built by Tesla, Tesla factory in the background". I'm not using "breathtaking", "professional", "award winning", etc., because that's already handled by "sai-enhance".

I failed a lot of times before when just using an img2img method, but with ControlNet I mixed both lineart and depth to strengthen the shape and clarity of the logo within the generations.

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. This should import the complete workflow you have used, even including unused nodes.

Just started with ComfyUI and really love the drag-and-drop workflow feature. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use "only masked area" where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

No workflow metadata will be saved in any image. Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows.
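The mask2image → blur → image2mask trick described above amounts to blurring the mask itself so the inpainted region gets a soft falloff instead of a hard edge. A minimal Pillow sketch of that idea (an illustration, not a ComfyUI node; the radius value is an arbitrary assumption):

```python
from PIL import Image, ImageFilter

def feather_mask(mask, radius=8):
    """Soften a hard black/white inpainting mask so edits blend in."""
    # mask is a mode-"L" image: white = region to inpaint, black = keep.
    # A Gaussian blur turns the hard boundary into a gradual transition,
    # which is what "feathering" means here.
    return mask.filter(ImageFilter.GaussianBlur(radius))
```

Larger radii blend more aggressively but also bleed the edit further into the surrounding pixels, which mirrors the tuning pain described in the comment.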
From the ComfyUI_examples, there are two different two-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. I'll do you one better, and send you a PNG you can directly load into Comfy.

Comfy Workflows. I'm currently running into certain prompts where the latent upscale just looks awful.

Assistants such as OpenAI Sora, Runway, Pika Labs, SVD, and similar AI video tools capable of text-to-video, image-to-video, video-to-video, AI voice-over acting, AI music, AI newsroom, live-action AI CGI VFX, and AI video editing. Welcome to the future!

If I drag and drop the image, isn't it supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load.

But it is extremely light as we speak, so much so.

Pro tip: if you want to use multiple instances of these workflows, you can open them in different browser tabs.

No, because it's not there yet.

This is a subreddit for the discussion, and posting, of AI-generated furry content. This includes yiff. Please keep posted images SFW.

A transparent PNG in the original size, containing only the newly inpainted part, will be generated.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Try downloading the PNG (download it from the image itself, not from the post's image gallery, which is a JPEG preview). The problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them.

Wherever you launch ComfyUI from: python main.py.

I put together a workflow doing something similar, but taking a background and removing the subject, inpainting the area so I got no subject.
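The stripping problem the comments keep running into is generic: any re-encode that doesn't explicitly carry the text chunks along will drop them. A small self-contained demonstration of that behavior, assuming Pillow (file names and the fake workflow content are made up for the illustration):

```python
import json, os, tempfile
from PIL import Image, PngImagePlugin

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "with_workflow.png")
dst = os.path.join(tmp, "reencoded.png")

# Build a small PNG carrying a fake workflow the way ComfyUI does:
# as a text chunk riding alongside the pixels.
info = PngImagePlugin.PngInfo()
info.add_text("workflow", json.dumps({"nodes": []}))
Image.new("RGB", (8, 8)).save(src, pnginfo=info)

# Re-encoding without passing the metadata along (what image hosts and
# format conversions typically do) silently drops the chunk.
Image.open(src).save(dst)

print("workflow" in Image.open(src).info)   # True
print("workflow" in Image.open(dst).info)   # False
```

The pixels are identical before and after; only the invisible chunk is gone, which is why a re-uploaded image can look fine yet load nothing when dragged into ComfyUI.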
I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas, they simply drop PNGs into an empty ComfyUI. How do I download the workflow? The entire workflow is embedded in the workflow picture itself. The example pictures do load.

- How to upscale your images with ComfyUI — View Now
- Merge 2 images together with this ComfyUI workflow — View Now
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images — View Now
- Animation workflow: a great starting point for using AnimateDiff — View Now
- ControlNet workflow: a great starting point

Welcome to the unofficial ComfyUI subreddit. ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface. Here are a few places where experts and enthusiasts share their ComfyUI workflows: ComfyUI_Workflows.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Just as an experiment, drag and drop one of the PNG files you have outputted into ComfyUI and see what happens.

But let me know if you need help replicating some of the concepts in my process. You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them!

Instead, I created a simplified 2048x2048 workflow. Although the rendering takes ages, I'm currently reasonably satisfied with the results.

An example of the images you can generate with this workflow: not sure if my approach is correct or sound, but if you go to my other post (the one on just getting started), download the PNG, and throw it into ComfyUI, you'll see the node setup I sort of cobbled together. So OP, please upload the PNG to civitai.com and then post a link back here if you are willing to share it.
The Solution: To tackle this issue, with ChatGPT's help, I developed a Python-based solution that injects the metadata back into the Photoshop-edited file (PNG).

Save one of the images, then drag and drop it onto the ComfyUI interface.

Download the zip, unzip it, and place the files in a folder of your choice.

I noticed that ComfyUI is only able to load workflows saved with the "Save" button, not with the "Save API Format" button.

Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked

I generated images from ComfyUI as .png files, with the full workflow embedded, making it dead simple to reproduce the image or make new ones using the same workflow.

Share, discover, and run thousands of ComfyUI workflows. If you see a few red boxes, be sure to read the Questions section on the page. And while I'm posting the link to the CivitAI page again, I could also mention that I added a little prompting guide on the side of the workflow.

Layer (copy and paste) this PNG on top of the original in your go-to image editing software. Save the new image.

I'm revising the workflow below to include a non-latent option. Txt2Img workflow for ComfyUI.

Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. Idk why it's giving me a "character subject". The model I use is DynaVision. Positive prompt: "Capture the breathtaking beauty of the celestial night sky, filled with stars, planets, and the Milky Way, using a long exposure technique with a 35mm lens to reveal the intricate details of the cosmos in a high-resolution composition, presenting it in a photographic style."

I am utilizing the cached-image feature of the Image Sender/Receiver nodes to generate a batch of four images, and then I choose which ones I want to upscale (if any) and queue another prompt.
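The injection idea described above can be sketched with Pillow (this is an illustration under my own assumptions, not the commenter's actual script; the "workflow" chunk name and the function name are hypothetical):

```python
from PIL import Image, PngImagePlugin

def inject_workflow(original_png, edited_png, out_png):
    """Copy the embedded workflow from an original ComfyUI PNG into an
    edited copy (e.g. one that went through Photoshop and lost it)."""
    workflow = Image.open(original_png).info.get("workflow")
    if workflow is None:
        raise ValueError("original PNG has no embedded workflow")
    # Re-attach the chunk while saving the edited pixels.
    info = PngImagePlugin.PngInfo()
    info.add_text("workflow", workflow)
    Image.open(edited_png).save(out_png, pnginfo=info)
```

The resulting file keeps the Photoshop edits but drags and drops into ComfyUI like the original, since the graph JSON rides along again.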
Wherever you launch ComfyUI from, python main.py will now need to become python main.py --disable-metadata.

The metadata from PNG files saved from ComfyUI should transfer over to other ComfyUI environments.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution.

The PNG files produced by ComfyUI contain all the workflow info. Save the new image.

SDXL 1.0 ComfyUI Tutorial - readme file updated with SDXL 1.0 download links and new workflow PNG files - the new updated free-tier Google Colab now auto-downloads SDXL 1.0 and the refiner and installs ComfyUI.

4K subscribers in the aiyiff community.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.