Working on larger latents, the challenge is to keep the model generating an image that stays reasonably coherent with the original low-resolution image.

Welcome to the unofficial ComfyUI subreddit.

Upscale Image - these nodes can also be used to downscale, by setting a direct resolution or by going under 1 on the 'Upscale Image By' node. I gave up on latent upscale.

We introduced a Freedom parameter that drives how much new detail is introduced in the upscaled image.

Latent upscale it, or use a model upscale, then VAE encode it again and run it through the second sampler. Depending on the noise and strength, it ends up treating each square as an individual image.

There's "Latent Upscale By", but I don't want to upscale the latent image. Appears to work poorly with external (e.g. natural or MJ) images.

"LoadImage / Load Image", "Upscale Model Loader / Load Upscale Model", "ImageUpscaleWithModel / Upscale Image (using Model)", "Image Save / Image Save" or "SaveImage / Save Image". That will upscale with no latent invention/injection of creative bits, but still intelligently adds pixels per ESRGAN upscaler models.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but more can easily be added). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly and it works like a charm.

Is there a benefit to upscaling the latent instead?

Some images made with my next model, Aether Real SDXL. It fixes issues with bad skin on the base model. This is just a simple build off what's given and some of the newer nodes that have come out.

There's only so much you can do with an SD1.5 model. And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky.
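The "going under 1" trick above is just arithmetic on the target resolution; here is a minimal sketch in plain Python (the helper name is mine, not a ComfyUI node):

```python
def upscale_by(width: int, height: int, factor: float) -> tuple:
    """Output resolution of an 'Upscale Image By'-style resize.
    A factor under 1.0 downscales instead of upscaling."""
    return (round(width * factor), round(height * factor))

# bring a 4x model's output back down to a 2x result
print(upscale_by(4096, 4096, 0.5))  # (2048, 2048)
```

The same helper covers both directions, which is why a single "upscale by" node can double as a downscaler.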
With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.

This. I've so far achieved this with the Ultimate SD Upscale node and the 4x-Ultramix_restore upscale model.

Nearest-exact is a crude image upscaling algorithm that, when combined with your low denoise strength and step count in the KSampler, means you are basically doing nothing to the image when you denoise it, leaving all the jagged pixels introduced by your initial upscale.

(Optional) Upscale to 3x by default, using ControlNet to stick to the base image, with speed provided by Automatic CFG. Second pic.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

Latent quality is better, but the final image deviates significantly from the initial generation.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

I was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

At the moment I generate my image with a detail LoRA at 512 or 768 to avoid weird generations, then latent upscale by 2 with nearest and run them with 0.5 denoise. My problem is that my generation produces a 1-pixel line at the right/bottom of the image, which is weird/white.

Upscale x1.5 ~ x2 - no need for a model, can be a cheap latent upscale. Sample again, denoise=0.5.
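One common cause of a stray 1-pixel line at the right/bottom edge (a hedged guess, assuming a standard SD setup): the image resolution isn't a multiple of 8, so the VAE round-trip pads the edge. A sketch of snapping dimensions before encoding; the helper name is hypothetical, not an existing node:

```python
def snap_to_multiple(width: int, height: int, multiple: int = 8) -> tuple:
    """Round dimensions down to the nearest multiple of 8 so the image
    VAE-encodes cleanly, without padding showing up at the right/bottom edge
    (SD latents are 1/8 of pixel resolution)."""
    return (width - width % multiple, height - height % multiple)

print(snap_to_multiple(1023, 770))  # (1016, 768)
```

If your generation or resize steps always produce multiples of 8, the padding line should not appear in the first place.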
It's nothing spectacular, but it gives good, consistent results.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). (I am unable to upload the full-sized image.)

I have been generally pleased with the results I get from simply using additional samplers. 0.9, euler.

Once I've amassed a collection of noteworthy images, my plan is to compile them into a folder and execute a 2x upscale in a batch. 0.5, euler, sgm_uniform, or CNet strength 0.…

Here's an example with some math to double the original image's resolution.

Bella donna Italiana - 8K image - ComfyUI + DreamShaperXL + TiledDiffusion + Kohya Deep Shrink - latent upscale + CLIP Vision, and my poor 4060 Ti.

Along with the normal image preview, other methods are: latent upscaled 2x; hires fix 2x (two-pass img2img); image upscaled 4x using the nearest-exact upscale method. No negatives needed.

No matter what, Upscayl is a speed demon in comparison.

Last two images are just "a photo of a woman/man".

You could try to push your denoise at the start of an iterative upscale to, say, .4.

Vase Lichen.

I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

So instead of one girl in an image, you get 10 tiny girls stitched into one giant upscaled image.

This is the fastest way to test images against an image I have a higher-res sample of for testing. Thanks for all your comments.
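The doubling math mentioned above, sketched in Python: a fixed-ratio model (say 4x) overshoots a 2x target, so you follow it with a resize by target/model (helper name is mine):

```python
def post_model_factor(model_scale: float, target_scale: float) -> float:
    """Resize factor to apply after a fixed-ratio upscale model,
    e.g. a 4x model used for a 2x result -> 0.5."""
    return target_scale / model_scale

w, h = 832, 1216
after_model = (w * 4, h * 4)   # what a 4x model actually outputs
f = post_model_factor(4, 2)    # 0.5
final = (round(after_model[0] * f), round(after_model[1] * f))
print(final)  # (1664, 2432) - exactly double the original
```

This is also consistent with the "use a 4x upscaler for a 2x upscale" advice: the model runs at its native ratio, and the cheap resize afterwards lands you on the size you actually wanted.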
FYI, values closer to 1 will stick to your input image more, while values closer to 10 allow more creative freedom but may introduce unwanted elements in your new image.

Generate the initial image at 512x768, upscale x1.5 ~ x2.

Do the same comparison with images that are much more detailed, with characters and patterns.

Upscale by model will take you up to like 2x or 4x or whatever. I want to upscale my image with a model, and then select the final size of it.

Pause/preview images to proceed forward in the workflow.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and vertical lines, and blurring).

Also, I did edit the custom node ComfyUI-Custom-Scripts' python file string_function.py, in order to allow the 'Preview Image' node to…

Mar 22, 2024: You have two different ways you can perform a "Hires Fix" natively in ComfyUI: Latent Upscale, or an Upscaling Model. You can download the workflows over on the Prompting Pixels website.

Before.

Use a ControlNet relevant to your image so you don't lose too much of your original image, and combine that with the iterative upscaler, concatenating a secondary positive prompt telling the model to add detail or improve detail.

It's high quality, and it's easy to control the amount of detail added using control scale and restore CFG, but it slows down at higher scales faster than Ultimate SD Upscale does.

Go up by 4x, then downscale to your desired resolution using image upscale.

The final node is where ComfyUI takes those images and turns them into a video. Thanks.

This works best with Stable Cascade images, might still work with SDXL or SD1.5.

It will replicate the image's workflow and seed. After that I send it through a face detailer and an Ultimate SD Upscale.
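"Go up by 4x, then downscale to your desired resolution" amounts to picking one resize factor that fits the oversized output into the target. A small illustrative helper (names are mine, not part of any node):

```python
def fit_factor(current_w: int, current_h: int,
               target_w: int, target_h: int) -> float:
    """One resize factor that fits an oversized upscale result inside
    the desired final resolution, preserving aspect ratio."""
    return min(target_w / current_w, target_h / current_h)

# a 4x model turned 1024x1024 into 4096x4096, but the goal was 2048x2048
print(fit_factor(4096, 4096, 2048, 2048))  # 0.5
```

Taking the min of the two ratios guarantees neither side overshoots the target box when the aspect ratios differ.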
The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do x4 and that's often too big to process), then send it back to VAE encode and sample it again.

Please share your tips, tricks, and workflows for using this software to create your AI art.

ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size.

Uses a face detailer to enhance faces if required.

Still working on the whole thing, but I got the idea down. You guys have been very supportive, so I'm posting here first.

Ideally, I'd love to leverage the prompt loaded from the image metadata (optional), but more crucially, I'm seeking guidance on how to efficiently batch load images from a folder for subsequent upscaling.

Girl with flowers.

Images are too blurry and lack detail; it's like upscaling any regular image with some traditional methods.

Because the upscale model of choice can only output a 4x image, and they want 2x.

It is intended to upscale and enhance your input images.

These comparisons are done using ComfyUI with default node settings and fixed seeds. 0.5 denoise (needed for latent, idk why though) through a second KSampler.

This is not the case.

Ultimate SD Upscale is also a node; if you don't have enough VRAM, it tiles the image so that you don't run out of memory.

You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (use the upscale node with 0.X values) if you want to benefit from the higher-res processing.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

The title says it all: after launching a few batches of low-res images, I'd like to upscale all the good results.

I upscaled it to a resolution of 10240x6144 px for us to examine the results.
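For the batch-load question, the gathering step itself is plain Python; a sketch that collects every image in a folder so a pipeline can loop over them (function name is mine, not a ComfyUI API):

```python
from pathlib import Path

def collect_images(folder: str,
                   exts: tuple = (".png", ".jpg", ".jpeg", ".webp")) -> list:
    """Gather every image file in a folder, sorted, so an upscale
    pipeline can simply iterate over the result."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.is_file() and p.suffix.lower() in exts)

# usage sketch: for path in collect_images("./outputs"): queue_upscale(path)
```

In ComfyUI itself, batch-loading nodes from custom node packs play the same role; this just shows the underlying idea.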
Here are details on the workflow I created: this is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. No attempts to fix JPG artifacts, etc.

After borrowing many ideas, and learning ComfyUI. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

LOL, yeah, I push the denoising on Ultimate Upscale too, quite often, just saying "I'll fix it in Photoshop".

In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler. In Episode 12 of the ComfyUI tutorial series, you'll learn how to upscale AI-generated images without losing quality. A step-by-step guide to mastering image quality.

Your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description for the area defined by the coordinates starting from x:0px y:320px, to x:768px y:…

Grab the image from your file folder, drag it onto the entire ComfyUI window.

(206x206), which I'm then upscaling in Photopea to 512x512, just to give me a base image that matches the…

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

So I basically want to select multiple images from my drive so that the upscaler scales all the images I have selected, using the same sampler settings and whatnot.

If I want larger images, I upscale the image. I have a custom image resizer that ensures the input image matches the output dimensions.

It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but this can be changed to whatever. This way I can upscale my images while I am away from my system.
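On coordinate-based conditioning: SD latents are 1/8 of pixel resolution, so a pixel-space area like the x:0px y:320px one above ends up on an 8x coarser grid. An illustrative conversion, not ComfyUI's actual implementation:

```python
def area_to_latent(x: int, y: int, w: int, h: int, scale: int = 8) -> tuple:
    """Map a pixel-space conditioning rectangle onto the latent grid
    (1/8 of pixel resolution for SD), truncating to whole cells."""
    return (x // scale, y // scale, w // scale, h // scale)

print(area_to_latent(0, 320, 768, 448))  # (0, 40, 96, 56)
```

This is also why area coordinates that aren't multiples of 8 can behave unexpectedly: they get truncated to whole latent cells.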
That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like.

I haven't been able to replicate this in Comfy. I created a workflow with Comfy for upscaling images.

The key observation here is that by using the EfficientNet encoder from Hugging Face, you can immediately obtain what your image should look like after stage C if you were to create it with stage…

The problem here is the step after your image loading, where you scale up the image using the "Image Scale to Side" node. I only have 4GB of Nvidia VRAM, so large images crash my process.

Hi, guys.

After.

Is this possible? Thanks!

Easy prompting to achieve good results.

Hires fix with an add-detail LoRA.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

Until now I was launching a pipeline on each image one by one, but is it possible to have an automatic iterative task to do this? I would give the input directory and the pipeline would run by itself on each image.

Enhance the image by adding HDR effects.

But I want you guys' opinion on the upscale. You can download both images from my Google Drive; I cannot upload them since they are both 500MB - 700MB.

You end up with images anyway after KSampling, so you can use those upscale nodes.

Instead, I use Tiled KSampler with 0.…

The best method I… Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. But I probably wouldn't upscale by 4x at all if fidelity is important.
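For the low-VRAM and tiled-sampler comments above: a rough estimate of how many tiles a tiled sampler or upscaler has to process, assuming a simple overlapping grid (the exact tiling scheme varies by node, so treat the numbers as illustrative):

```python
import math

def tile_count(width: int, height: int, tile: int = 512, overlap: int = 64) -> int:
    """Rough number of tiles processed when VRAM only allows
    `tile`-sized chunks; tiles advance by (tile - overlap) so the
    seams can be blended."""
    step = tile - overlap
    cols = math.ceil(max(width - overlap, 1) / step)
    rows = math.ceil(max(height - overlap, 1) / step)
    return cols * rows

print(tile_count(2048, 2048))  # 25 tiles at these default settings
```

Each tile is a full sampling pass, which is why tiled upscales trade VRAM for time: a 4x result can easily mean dozens of passes.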
I've played around with different upscale models in both applications, as well as settings. 0.9, end_percent 0.…

My workflow runs about like this: [KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, the same seed, and most other settings, with the only differences among my (for example) four KSamplers in the #2-#n positions…

I was running some tests last night with SD1.5.

2x upscale using lineart ControlNet.

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. 1.5 models (seems pointless to go larger).

Overall:
- image upscale is less detailed, but more faithful to the image you upscale.
- latent upscale looks much more detailed, but gets rid of the detail of the original image.

All images except the last two were made by Masslevel.

PS: If someone has access to Magnific AI, can you please upscale and post results for 256x384 (5 JPG quality) and 256x384 (0 JPG quality)?

Ugh.

Does anyone have any suggestions? Would it be better to do an iterative upscale?

A homogeneous image like that doesn't tell the whole story though ^^.

Save image with metadata.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

Thanks. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

The resolution is okay, but if possible I would like to get something better.

Using ComfyUI, you can increase the size… For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

2x upscale using Ultimate SD Upscale and Tile ControlNet.
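The KSampler / VAE Decode / Resize / VAE Encode / KSampler chain described above implies a resolution schedule; a sketch that computes the size entering each pass (helper name is mine):

```python
def pass_sizes(width: int, height: int, factors) -> list:
    """Resolution entering each sampler in a multi-pass chain, given
    the resize factor applied between passes."""
    sizes = [(width, height)]
    for f in factors:
        w, h = sizes[-1]
        sizes.append((round(w * f), round(h * f)))
    return sizes

print(pass_sizes(512, 768, [1.5, 1.5]))
# [(512, 768), (768, 1152), (1152, 1728)]
```

Keeping each step at a modest factor (~1.5x) is what lets every later KSampler run at a low denoise without the image falling apart.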
This next queue will then create a new batch of four images, but also upscale the selected images cached in the previous prompt.

Don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. Both of these are of similar speed.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.

Jan 8, 2024: Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.

Upscaled by UltraSharp 4x upscaler.

The workflow is kept very simple for this test: load image, upscale, save image.

2 options here. 0.6 denoise and either: CNet strength 0.…

There are also "face detailer" workflows for faces specifically. There is a face detailer node.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI.

ComfyUI: Ultimate Upscaler - Upscale any image from Stable Diffusion, MidJourney, or photo! - YouTube.

This is done after the refined image is upscaled and encoded into a latent.

As my test bed, I'll be downloading the thumbnail from, say, my Facebook profile picture, which is fairly small.