
ComfyUI Workflows: AnimateDiff

Use an SD 1.5 model (SDXL is possible, but not recommended because video generation is very slow). LCM improves generation speed: 5 steps per frame by default, and generating a 10-second video takes about 700 s on a 3060 laptop. You can change the model.

Jan 2, 2024 — Install the ComfyUI dependencies. This setting should usually be kept to 8 for AnimateDiff.

Jan 4, 2024 — The latest update on the AnimateDiff flicker-free workflow within ComfyUI for animation videos, a creation born from an exploration of generative AI. Click the Manager button in the main menu, and look for the example that uses ControlNet lineart.

The workflow is divided into 5 parts:
Part 1 - ControlNet Passes Export
Part 2 - Animation Raw - LCM
Part 3 - AnimateDiff Refiner - LCM
Part 4 - AnimateDiff Face Fix - LCM
Part 5 - Batch Face Swap - ReActor [Optional] [Experimental]

What this workflow does: it can convert any video (cartoon, realistic, or anime) into another style. Tip 3: this workflow can also be used for vid2vid style conversion; just input the original source frames as the raw input and turn the denoise up.

I am using ComfyUI with AnimateDiff for the animation afterwards; you have the full node setup in the image here, nothing crazy.

This ComfyUI workflow is designed for creating animations from reference images using AnimateDiff and IP-Adapter. I can never get a perfectly crisp visual.

AnimateDiff is a plug-and-play module that turns most community models into animation generators, without the need for additional training.

We will use the following two tools: ComfyUI + AnimateDiff + ControlNet + IPAdapter, for video-to-animation restyling. Launch ComfyUI with `python main.py --force-fp16`.

Mar 25, 2024 — Attached is a workflow for ComfyUI to convert an image into a video.
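The LCM timing quoted above (a 10-second clip in roughly 700 s on a 3060 laptop, at 5 steps per frame) can be sanity-checked with back-of-the-envelope arithmetic. This is only an illustrative sketch, not a benchmark; the 8 fps figure is an assumed output frame rate, not a number taken from the source.

```python
def estimate_generation_time(duration_s: float, fps: float, seconds_per_frame: float) -> float:
    """Rough wall-clock estimate for rendering an AnimateDiff clip:
    total frames times average seconds spent per frame."""
    total_frames = duration_s * fps
    return total_frames * seconds_per_frame

# With the numbers quoted above: 10 s at an assumed 8 fps is 80 frames,
# so a ~700 s render works out to roughly 8.75 s per frame.
frames = 10 * 8            # 80 frames
per_frame = 700 / frames   # ~8.75 s per frame
```

Changing either the clip length or the per-frame step count scales the total time linearly, which is why LCM's low step count matters so much.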
Apr 24, 2024 — This ComfyUI AnimateDiff workflow is designed for users who want to explore the sophisticated features of AnimateDiff across the AnimateDiff V3, AnimateDiff SDXL, and AnimateDiff V2 versions.

Creating passes: two types of passes are necessary, soft edge and open pose. Be prepared to download a lot of nodes via the ComfyUI Manager. Even with a lower-end computer, you can still create animations for platforms like YouTube Shorts, TikTok, or media advertisements. Most settings are the same as with HotshotXL, so this serves as an appendix to that guide.

Jan 18, 2024 — A: To refine the workflow, load the refiner workflow in a new ComfyUI tab and copy the prompts from the raw tab into the refiner tab. Also suitable for 8 GB VRAM GPUs.

Upscaling ComfyUI workflow: that would be any AnimateDiff txt2vid workflow with an image input added to its latent, or a vid2vid workflow with the load video node (and everything after it, up to the VAE encode) replaced with a load image node.

AnimateDiff models: clone this repository to your local machine. We will provide an in-depth review of the AnimateDiff workflow, specifically version 8.

Building upon the AnimateDiff workflow: the ComfyUI AnimateDiff and ControlNet morphing workflow combines a series of images into an output video.

This error occurred because a workflow built for ComfyUI-AnimateDiff-Evolved was being run with the ArtVentureX version of AnimateDiff.

I have upgraded the previous AnimateDiff model to the v3 version and updated the workflow accordingly. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. Generating and organizing ControlNet passes in ComfyUI.
Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Nov 24, 2023 — AnimateDiff workflow: OpenPose keyframing in ComfyUI. This code draws heavily from Cubiq's IPAdapter_plus, while the workflow uses Kosinkadink's AnimateDiff-Evolved and ComfyUI-Advanced-ControlNet, Fizzledorf's FizzNodes, Fannovel16's Frame Interpolation, and more. However, we use this tool to control keyframes: ComfyUI-Advanced-ControlNet.

ComfyUI provides a convenient workflow that makes generating videos and animations much simpler. After a quick look, I summarized some key points. First, the placement of ControlNet stays the same; we just use ComfyUI-Advanced-ControlNet to control keyframes.

If you have missing (red) nodes, click on the Manager and then click Install Missing Custom Nodes to install them one by one. ComfyUI has Stable Swarm, and that has a lot of potential. Please read the AnimateDiff repo README and Wiki for more information about how it works at its core.

ComfyUI-Advanced-ControlNet is used for loading files in batches and controlling which latents should be affected by the ControlNet inputs (work in progress; more advanced workflows and features for AnimateDiff usage will come later). The core of this process lies in the strategic use of AutoMask, which plays a crucial role in defining and isolating the specific area for the visual transformation.

May 15, 2024 — This quick tutorial shows how I created an audioreactive animation in AnimateDiff. The AnimateDiff node adjusts the animation's dynamics. This is an improved AnimateDiff integration for ComfyUI, with advanced sampling options dubbed Evolved Sampling that are usable outside of AnimateDiff. First I used Cinema 4D with the sound effector MoGraph to create the animation; there are many tutorials online on how to set it up.

It is not necessary to input black-and-white videos. Robust file management capabilities enable easy upload and download of ComfyUI models, nodes, and output results.
I am using it locally to test it.

Jun 17, 2024 — Install this extension via the ComfyUI Manager by searching for AnimateDiff Evolved. This node simplifies the process of preparing your animation projects, allowing you to focus on the creative aspects rather than the technical setup.

SDXL default ComfyUI workflow. First, the placement of ControlNet remains the same.

Jan 22, 2024 — Anthony Quoc Anh Doan, Ramblings of a Happy Scientist: ComfyUI AnimateDiff, ControlNet and AutoMask workflow. Note: remember to add your models, VAE, LoRAs, etc.

I have a 3060 Ti with 8 GB VRAM (32 GB RAM) and have been playing with AnimateDiff for weeks. This ComfyUI workflow, which leverages AnimateDiff and ControlNet TimeStep KeyFrames to create morphing animations, offers a new approach to animation creation. Best part: since I moved to ComfyUI (AnimateDiff), I can still use my PC without any lag, browsing and watching movies while it generates in the background.

If the optional audio input is provided, it will also be combined into the output video. frame_rate: how many of the input frames are displayed per second. You can adjust the frame load cap to set the length of your animation.

Feb 17, 2024 — ComfyUI Starting Guide 1: basic introduction to ComfyUI and comparison with Automatic1111.

By combining ControlNets with AnimateDiff, exciting opportunities in animation are unlocked. Compared with AnimateDiff, its biggest strength is speed: the number of steps needed to generate high-quality video is greatly reduced.

A while back there were a number of competing vid2vid animation workflows: Deforum, Warpfusion, EBSynth. Open the provided LCM_AnimateDiff .json file, then select the Custom Nodes Manager button. Img2Img ComfyUI workflow. ControlNet workflow.
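The frame_rate and frame-load-cap settings described above trade off against each other: the same number of frames played at a higher frame rate yields a shorter clip. A small illustration (the function name is a hypothetical helper, not a ComfyUI node or API):

```python
def output_duration_s(frame_count: int, frame_rate: float) -> float:
    """Duration of the combined output video. The same frames at a
    higher frame_rate play faster, so the clip is shorter."""
    return frame_count / frame_rate

# e.g. with a frame load cap of 48 frames:
#   at 8 fps  -> 6.0 s of video
#   at 12 fps -> 4.0 s of video
```

So raising the frame load cap lengthens the animation, while raising frame_rate shortens it without changing how many frames must be generated.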
Jun 17, 2024 — Install this extension via the ComfyUI Manager by searching for AnimateDiff Evolved.

Following an overview of creating 3D animations in Blender, we delve into advanced methods of manipulating these visuals using ComfyUI.

Apr 26, 2024 — In this workflow, we employ AnimateDiff and ControlNet, featuring QR Code Monster and Lineart, along with detailed prompt descriptions, to enhance the original video with stunning visual effects.

Disable the ArtVentureX version of AnimateDiff, then uninstall and reinstall ComfyUI-AnimateDiff-Evolved.

Mar 13, 2024 — ComfyUI workflow (not Stable Diffusion; you need to install ComfyUI first), SD 1.5. Discover how to use AnimateDiff and ControlNet in ComfyUI for video transformation. The guide is available here:

You can also use these images for the refiner again, as in Tip 2.

3_0) AnimateDiff Refiner v3.0 — Precise transformations with AutoMask. Expanding on this foundation, I have introduced custom elements to improve the process's capabilities. This workflow involves loading multiple images, creatively inserting frames through the Steerable Motion custom node, and converting them into smooth transition videos using AnimateDiff LCM.

Once I had a contender, I upscaled it using Topaz and then brought it into Premiere to add music, make some color adjustments and titles, and export the final version.

If we're being really honest, the short answer is that AnimateDiff doesn't support init frames, but people are working on it.

Encompassing QR Code, Interpolation (2-step and 3-step), Inpainting, IP-Adapter, Motion LoRAs, Prompt Scheduling, ControlNet, and Vid2Vid: the ComfyUI AnimateDiff and Batch Prompt Schedule workflow.
Each serves a different purpose in refining the animation's accuracy and realism.

What this workflow does: it uses "only the ControlNet images" from an external source, pre-rendered beforehand in Part 1 of this workflow, which saves GPU memory and skips the ControlNet loading time (a 2-5 second delay).

Oct 25, 2023 — Running a workflow that includes ComfyUI-LCM downloaded the model automatically, so no advance download was needed. At the moment, only a Stable Diffusion 1.5-based model is supported. So AnimateDiff is used instead.

The download includes a .json file as well as a .png that you can simply drop into your ComfyUI workspace to load everything.

The ComfyUI workflow presents a method for creating animations with seamless scene transitions using Prompt Travel (Prompt Schedule). Any issues or questions, I will be more than happy to attempt to help when I am free to do so 🙂

Created by Datou: I tried to make the character's expression change, but failed.

New workflow: sound to 3D to ComfyUI and AnimateDiff.

The AnimateDiff node integrates model and context options to adjust animation dynamics. Load your animated shape into the video loader (in the example I used a swirling vortex). For 12 GB of VRAM, the maximum is about 720p resolution. View the Note on each node.

Then AnimateDiff came along and, I guess, beat out all the rest. Then, create a new folder to save the refined renders and copy its path into the output path node. Such a beautiful creation, thanks for sharing.

What this workflow does: it adds more detail to the SVD render, using SD models like Epic Realism (or any other) for the refiner pass. ControlNet latent keyframe interpolation.
Downloading different Comfy workflows and experiments to try to address this problem is a fine idea, but OP shouldn't get their hopes up too high, as if this were a problem that had already been solved.

Launch ComfyUI by running `python main.py`. Note that `--force-fp16` will only work if you installed the latest PyTorch nightly. Add your models, VAE, LoRAs, etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation.

AnimateDiff is dedicated to generating animations by interpolating between keyframes: defined frames that mark significant points.

Oct 26, 2023 — Drag and drop the workflow into the ComfyUI interface to get started. The power of ControlNets in animation.

With the CLI and Auto1111 behind me, I have now moved over to ComfyUI, where it's very smooth and I can even go higher in resolution. If the nodes are already installed but still appear red, you may have to update them: you can do this by uninstalling and reinstalling them.

AnimateDiff is a powerful tool to make animations with generative AI. This area contains the settings and features you'll likely use while working with AnimateDiff. It is made by the same people who made the SD 1.5 models. A higher frame rate means that the output video plays faster and has a shorter duration.

Dec 27, 2023 — The previous note had "ComfyUI + AnimateDiff" in the title but never actually got to AnimateDiff, so this time the topic really is ComfyUI + AnimateDiff.

AnimateDiff for SDXL is a motion module used with SDXL to create animations. Open the .json file and customize it to your requirements. If you succeed, please leave a comment and tell me how, thank you.

Feb 24, 2024 — How about high-quality AI animation with AnimateDiff in ComfyUI? This article covers everything from setting up ComfyUI to creating animations with AnimateDiff. Enjoy generating AI animations.

You'll have to play around with the denoise value to find a sweet spot.
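The denoise "sweet spot" mentioned above has a simple interpretation: in img2img-style sampling, the denoise value roughly controls what fraction of the sampler's steps actually run on top of the input frames. This is a simplified sketch of that relationship (real samplers differ in scheduling details; the helper name is illustrative):

```python
def steps_actually_run(total_steps: int, denoise: float) -> int:
    """Approximate number of sampler steps applied to the input.
    At denoise=1.0 every step runs and the input is mostly discarded;
    at low denoise only the final steps run, preserving composition."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(total_steps * denoise)

# With 20 sampler steps, a vid2vid denoise in the 0.6-0.75 range
# runs roughly 12-15 steps over the source frames: enough to restyle,
# not enough to lose the original composition.
```

This is why lowering denoise makes the output hug the source video more closely, at the cost of weaker stylization.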
Load the workflow; in this example we're using AnimateDiff-Lightning, a new video and animation generation model from ByteDance built on an optimized AnimateDiff algorithm. Load multiple images and click Queue Prompt.

Today, I'm integrating the IP-Adapter Face ID into the workflow, and together we'll delve into a few examples to gain a better understanding of it.

Created by CgTips: By using AnimateDiff and ControlNet together in ComfyUI, you can create animations that are high quality (minimal artifacts) and consistent (uniform across frames). The entire Comfy workflow is there for you to use.

Once you enter the AnimateDiff workflow within ComfyUI, you'll come across a group labeled "AnimateDiff Options". QR Code Monster introduces an innovative method of transforming any image into AI-generated art. After creating animations with AnimateDiff, Latent Upscale follows.

ComfyUI-Advanced-ControlNet handles loading files in batches and controlling which latents should be affected by the ControlNet inputs. Merging 2 images together.

If you generate AI illustrations as a hobby, you have surely thought this at some point.

Run the workflow, and observe the speed and results of LCM combined with AnimateDiff. ControlNet latent keyframe interpolation.

Sep 11, 2023 — Once you download the file, drag and drop it into ComfyUI and it will populate the workflow. Make your own animations with AnimateDiff.

May 24, 2024 — ComfyUI workflow: AnimateDiff + IPAdapter | image to video.

Created by CG Pixel: With this workflow you can create animations using AnimateDiff combined with SDXL or SDXL-Turbo and a LoRA model, to obtain animations at higher resolution and with more effects thanks to the LoRA model.

Special features are separated into different files due to a technical limitation. v1 (latest) - ComfyUI+AnimateDiff img2img.
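Dropping a saved image into ComfyUI repopulates the whole graph because ComfyUI embeds the workflow JSON in the PNG's metadata (tEXt chunks, conventionally keyed "prompt" and "workflow"). A minimal stdlib sketch of reading those chunks, as an illustration of the mechanism: CRCs are not verified and compressed zTXt/iTXt chunks are not handled.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for all tEXt chunks in a PNG byte string."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt body is: keyword, NUL separator, latin-1 text.
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length
    return out
```

Running this over a ComfyUI-saved PNG and feeding the "workflow" value to `json.loads` would recover the graph, which is exactly what the drag-and-drop import does.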
AnimateDiff workflows will often make use of these helpful node packs.

Jan 18, 2024 — These instructions assume you have ComfyUI installed and are familiar with how everything works, including installing missing custom nodes.

Nov 13, 2023 — Introduction. A free workflow download is included for ComfyUI.

Jan 20, 2024 — DWPose ControlNet for AnimateDiff is super powerful.

Dec 10, 2023 — Update: as of January 7, 2024, the AnimateDiff v3 model has been released. After installation, click the Restart button to restart ComfyUI.

We begin by uploading our videos, such as boxing-scene stock footage. The thing is, I really don't like how AnimateDiff sacrifices image quality. Explore the newest features, models, and node updates in ComfyUI and how they can be applied to your digital creations.

Longer animation made in ComfyUI using AnimateDiff with only ControlNet passes, in batches. This is a collection of AnimateDiff ComfyUI workflows: multiple image to video // smoothness. This guide assumes you have installed AnimateDiff. Thanks to all, and of course the AnimateDiff team, ControlNet, others, and our supportive community! Check out the AnimateDiff Evolved GitHub. In this guide I will try to help you get started with this.

It facilitates exploration of a wide range of animations, incorporating various motions and styles.
The ComfyUI Vid2Vid package offers two distinct workflows for creating high-quality, professional animations: Vid2Vid Part 1, which enhances your creativity by focusing on the composition and masking of your original video, and Vid2Vid Part 2, which uses SDXL Style Transfer to transform the style of your video to match your desired aesthetic.

AnimateDiff settings: how to use AnimateDiff in ComfyUI. If you have another Stable Diffusion UI, you might be able to reuse the dependencies. AnimateDiff is dedicated to generating animations by interpolating between keyframes: defined frames that mark significant points.

Jan 18, 2024 — This process highlights the importance of motion LoRAs, AnimateDiff loaders, and models, which are essential for creating coherent animations and customizing the animation process to fit any creative vision. This powerful animation tool enhances your creative process.

May 15, 2024 — This ComfyUI workflow adopts a video-restyling methodology that incorporates nodes such as AnimateDiff and ControlNet within the Stable Diffusion framework, extending video-editing capabilities. It will change the image into an animated video using AnimateDiff and IP-Adapter in ComfyUI.

In this tutorial, we explore the latest Stable Diffusion updates to my animation workflow using AnimateDiff, ControlNet, and IPAdapter.

Feb 10, 2024 — This ComfyUI workflow is designed for creating animations from reference images using AnimateDiff and IP-Adapter. I have been working with the AnimateDiff flicker process, which we discussed in our meetings.

How to use AnimateDiff (updated 1/6/2024). These originate all over the web: Reddit, Twitter, Discord, Hugging Face, GitHub, etc. ControlNet Depth ComfyUI workflow. Configure ComfyUI and AnimateDiff as per their respective documentation.

Introduction: AnimateDiff in ComfyUI is an amazing way to generate AI videos.
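One detail worth knowing behind the flicker discussion above: the original AnimateDiff motion module natively handles a fixed clip length (16 frames), and longer animations are typically produced by sampling overlapping context windows of frames and blending them. The sketch below is a toy partitioning scheme under assumed parameters (window length 16, overlap 4); the actual context options in AnimateDiff-Evolved are more sophisticated.

```python
def context_windows(total_frames: int, length: int = 16, overlap: int = 4):
    """Return [start, end) frame windows tiling a clip with overlap,
    with a final window flush against the end of the clip."""
    if total_frames <= length:
        return [(0, total_frames)]
    step = length - overlap
    windows, start = [], 0
    while start + length < total_frames:
        windows.append((start, start + length))
        start += step
    windows.append((total_frames - length, total_frames))
    return windows
```

Frames covered by two windows get denoised in both and blended, which is one reason overlap settings influence how smooth (or flickery) long clips look.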
This article is an installment of a series that concentrates on animation, with a particular focus on utilizing ComfyUI and AnimateDiff to elevate the quality of 3D visuals. In ComfyUI, the image IS the workflow.

Dec 4, 2023 — [GUIDE] ComfyUI AnimateDiff Guide/Workflows Including Prompt Scheduling - An Inner-Reflections Guide | Civitai.

Conversely, the IP-Adapter node facilitates the use of images as prompts in ways that can mimic the style, composition, or facial features of a reference image.

Mar 20, 2024 — ComfyUI Vid2Vid description. First: install missing nodes by going to the Manager, then Install Missing Nodes.

Importing images: use the "load images from directory" node in ComfyUI to import the JPEG sequence. This technique enables you to specify different prompts at various stages, influencing style, background, and other animation aspects.

Created by traxxas25: This is a simple workflow that uses a combination of IP-Adapter and QR Code Monster to create dynamic and interesting animations.

Simply drag or load a workflow image into ComfyUI! See the "troubleshooting" section if your local install is giving errors :)

Oct 8, 2023 — AnimateDiff ComfyUI. I feel like if you are really serious about AI art then you need to go Comfy for sure! I'm also just transitioning from A1111, hence using a custom CLIP text encode that emulates the A1111 prompt weighting so I can reuse my A1111 prompts for the time being, but for anything new I'll try to use native ComfyUI prompt weighting.
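Prompt travel / prompt scheduling, as described above, boils down to mapping frame indices to prompts. The sketch below shows only the simplest hold-last-keyframe behaviour; the real FizzNodes batch schedule syntax also interpolates between prompts, and the function name here is hypothetical:

```python
def prompt_for_frame(schedule: dict, frame: int) -> str:
    """Return the prompt of the most recent keyframe at or before `frame`."""
    applicable = [f for f in schedule if f <= frame]
    if not applicable:
        raise ValueError("no keyframe at or before this frame")
    return schedule[max(applicable)]

# A schedule in the spirit of "0": "...", "24": "...", "48": "...":
schedule = {
    0: "a village in spring",
    24: "the same village in summer",
    48: "the village in autumn",
}
```

So frames 0-23 render the spring prompt, frame 24 switches to summer, and so on; this is how one animation can drift through styles and scenes without cuts.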
Created by Jerry Davos: This workflow adds an AnimateDiff refiner pass. If you used SVD for the refiner, the results were not good, and if you used normal SD models for the refiner, they would flicker.

ComfyUI+AnimateDiff+SDXL: By leveraging this node, you can seamlessly integrate AnimateDiff models into your workflow, ensuring that all necessary components are correctly loaded and configured.

Jan 16, 2024 — AnimateDiff workflow: OpenPose keyframing in ComfyUI. Also, I would love to see a small breakdown on YouTube or here, since a lot of us can't access TikTok.

We have developed a lightweight version of the Stable Diffusion ComfyUI workflow that achieves 70% of the performance of AnimateDiff with RAVE.

Mar 25, 2024 — The zip file includes both a workflow .json file and a PNG that you can simply drop into your ComfyUI workspace to load everything. Here is an easy-to-follow tutorial.

This repository is the official implementation of AnimateDiff [ICLR 2024 Spotlight]. Transform your videos into anything you can imagine.

Sep 22, 2023 — In the previous article, we combined the AI video tool AnimateDiff with ControlNet to reproduce specific motions in animation. This time, we add ControlNet's "Tile" feature and try generating an animation that interpolates between two images. Necessary preparations: the basics of using AnimateDiff in ComfyUI.

This transformation is supported by several key components. This way you can essentially do keyframing with different OpenPose images. Load your reference image into the image loader for IP-Adapter.
AnimateDiff converts text prompts into video content.

I generated over 200 versions (many of those were just tests generating the first 30 seconds or so) as I tweaked settings and tried different prompts, LoRAs, models, etc.

In these ComfyUI workflows you will be able to create animations not just from text prompts but also from a video input, where you can set your preferred animation for any frame you want. Enter AnimateDiff Evolved in the search bar.

Feb 26, 2024 — It has been applied to AI video for some time, but the real breakthrough here is the training of an AnimateDiff motion module using LCM, which improves the quality of the results substantially and opens up models that previously did not generate good results.

In Automatic1111, I just have to switch to a new tab and the workflows are all there; no frame-of-mind swapping. In Comfy, an idea has to wait while I figure out which workflow to load and what changes to make.

By scheduling prompts at specific frames, you can effortlessly craft dynamic animations.

Oct 29, 2023 — How to use: 1) Split your video into frames and reduce to the desired FPS (I like a rate of about 12 FPS). 2) Run the step 1 workflow ONCE; all you need to change is where the original frames are located and the dimensions of the output you wish to have.

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning. As of this writing it is in its beta phase, but I am sure some are eager to test it out.

Apr 26, 2024 — This workflow harnesses the capabilities of AnimateDiff, ControlNet, and AutoMask to create stunning visual effects with precision and ease. This ComfyUI workflow is designed for creating animations from reference images using AnimateDiff and IP-Adapter. The above animation was created using OpenPose and Line Art ControlNets with a full-color input video.
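Step 1 above (splitting the source video into frames at a reduced FPS) is commonly done with ffmpeg's `fps` filter, and the processed frames are later reassembled the same way. The sketch below only builds the command lines; paths and the frame filename pattern are illustrative, and actually running them requires ffmpeg on your PATH.

```python
import subprocess

def extract_frames_cmd(src: str, out_pattern: str, fps: int = 12) -> list:
    """ffmpeg command to dump frames from a video at a reduced frame rate."""
    return ["ffmpeg", "-i", src, "-vf", f"fps={fps}", out_pattern]

def assemble_video_cmd(in_pattern: str, dst: str, fps: int = 12) -> list:
    """ffmpeg command to reassemble processed frames into an H.264 video."""
    return ["ffmpeg", "-framerate", str(fps), "-i", in_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", dst]

# To actually run the extraction (requires ffmpeg installed):
# subprocess.run(extract_frames_cmd("input.mp4", "frames/%05d.png"), check=True)
```

Reducing to ~12 FPS before processing cuts the number of frames AnimateDiff has to render roughly in half compared with typical 24/30 FPS footage, and frame interpolation can restore smoothness afterwards.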
Sep 6, 2023 — AnimateDiff makes it easy to create short anime clips, but reproducing the exact composition you want from prompts alone is still difficult. By combining it with ControlNet, familiar from image generation, it becomes easier to reproduce the intended animation. Necessary preparations: to use AnimateDiff and ControlNet in ComfyUI.

Jun 17, 2024 — How to install AnimateDiff Evolved.

Only the LCM version of Dreamshaper_v7 ("LCM_Dreamshaper_v7"), an SD 1.5-family model, is available, so that is what gets downloaded.

Jan 26, 2024 — With ComfyUI + AnimateDiff, I want to move my AI illustrations for about 4 seconds while keeping consistency, and more or less following my intent. But preparing a reference video and running pose estimation is a hassle! I'm working on a workflow to meet that (admittedly personal) need. The workflow isn't finished yet, and every day I keep thinking "this would work better if..."

Oct 5, 2023 — Showing a basic example of how to interpolate between poses in ComfyUI. I used some re-routing nodes to make it easier to copy and paste the OpenPose groups.

This ComfyUI workflow introduces a powerful approach to video restyling, specifically aimed at transforming characters into an anime style while preserving the original backgrounds. The ComfyUI workflow presents a method for creating animations with seamless scene transitions using Prompt Travel (Prompt Schedule).
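Interpolating between poses pairs naturally with the "ControlNet latent keyframe interpolation" mentioned several times above: ramping a ControlNet's strength across the batch of latents so one pose fades out as another fades in. A toy linear ramp as an illustration (Advanced-ControlNet offers richer curves; this helper is hypothetical, not its API):

```python
def strength_ramp(n_frames: int, start: float, end: float) -> list:
    """Linearly interpolate a ControlNet strength across a batch of latents."""
    if n_frames < 1:
        raise ValueError("need at least one frame")
    if n_frames == 1:
        return [start]
    step = (end - start) / (n_frames - 1)
    return [round(start + i * step, 4) for i in range(n_frames)]

# e.g. fade a pose ControlNet out over 5 frames:
# strength_ramp(5, 1.0, 0.0) -> [1.0, 0.75, 0.5, 0.25, 0.0]
```

Applying one ramp fading out on the first pose and the mirror ramp fading in on the second yields a smooth handoff between the two OpenPose groups.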