Currently, many AI platforms on the market offer users AI computing services, of which Google Colab is the best known. However, its limitations mean it still cannot meet the day-to-day training needs of AI enthusiasts and researchers. For example, the maximum running time on Google Colab is 12 hours; even on the Pro edition, a session can only last 24 hours. In addition, tasks run on GPUs such as the K80, P100, and T4.
As is widely known, AI training takes a tremendously long time and depends heavily on GPU performance. AlphaFold2, developed by Google DeepMind, requires an A100 to run its tasks.
Cloudam HPC provides users with cost-effective cloud-HPC services for AI/ML, on which users can submit jobs however they prefer: from the command line, or by deploying code and checking data in Jupyter Notebook/JupyterLab.
AI-generated Painting Goes Viral
Disco Diffusion, an AIGC tool based on CLIP-Guided Diffusion, is all over the internet these days. It can generate a visually pleasing picture from just one input sentence. (For a detailed technical analysis: https://arxiv.org/abs/2105.05233)
This article focuses on how to run Disco Diffusion on Cloudam. Before the tutorial, let's look at the amazing pictures we generated. Below each picture are the input keywords.
Keywords: high performance computing, cloud, scientist, drug, time, future, cyberpunk
Keywords: artstation, Greg Rutkowski, sea, dikel, ship, industrialization, cloud, time, future, afternoon
If you want a picture in a new style, you can change the keywords in the Jupyter Notebook. In this tutorial, the GPU we chose is an NVIDIA T4, the image resolution is 1280*768, and the other settings are left at their defaults. It takes about 15 minutes to generate an image, roughly 6 times faster than Google Colab.
Hands-on Disco Diffusion
First of all, start a Jupyter Notebook Desktop with an NVIDIA T4 graphics card, and open it when it is ready.
Next, open the terminal and clone the repository, which contains the notebook (Disco_Diffusion.ipynb):
git clone https://github.com/alembics/disco-diffusion.git
Since the project runs on PyTorch, which requires several libraries to be installed, it is recommended to use Anaconda, which is pre-installed on Cloudam and can be loaded with the commands below:
module add Anaconda3
source activate
We can build a dedicated environment for Disco Diffusion, select Python 3.9, and add the environment to ipykernel:
conda create -n diffusion python=3.9
conda activate diffusion
conda install -c anaconda ipykernel
python -m ipykernel install --user --name=diffusion
Then we can open Disco_Diffusion.ipynb and choose diffusion as the kernel.
Run the notebook in 4 steps: build the environment, set the models, set up the text prompts, and generate a picture.
Step 1: Build the environment
The first cell checks the local GPU.
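As a sketch, that GPU check can also be reproduced by hand from Python; the helper below simply shells out to nvidia-smi (the function name is ours, not from the notebook):

```python
# Minimal sketch of the GPU check: run nvidia-smi if the driver is present.
# The helper name gpu_info is our own, not from Disco_Diffusion.ipynb.
import shutil
import subprocess


def gpu_info() -> str:
    """Return nvidia-smi's report, or a notice if no NVIDIA driver is found."""
    if shutil.which("nvidia-smi") is None:
        return "No NVIDIA driver found"
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    return result.stdout


print(gpu_info())
```

On the T4 desktop used in this tutorial, this prints the familiar nvidia-smi table; elsewhere it degrades gracefully to a notice.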
In the second cell, a notification will say that Google Colab is not detected, and the model files will be loaded locally.
In the third cell, the dependency packages are checked; any that are missing can be installed via conda. Note that this step will take some time.
Below are the packages and the commands to install them:
conda install -c pytorch pytorch torchvision torchaudio cudatoolkit=10.2
conda install -c conda-forge opencv timm lpips ftfy einops omegaconf pandas
The remaining three cells define functions and models and can be run directly.
Step 2: Set models
Please note that the default model is 512*512, which is "GPU-gobbling", so you can set it to 256*256.
Then comes the model setting. "batch_name" is the name of the batch, and the output pictures will be named accordingly.
"width_height" is the expected size of the picture, which should be set as multiples of 64px. The minimum size on the default CLIP model is 512px. If you forget to set it as one of the multiples of 64px, Disco Diffusion will adjust it automatically.
"step" means the steps of iteration, the higher the number, the more details in the picture.
The other two sections, "Animation Settings" and "Extra Settings", can be left unchanged at this stage.
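Putting step 2 together, here is a hedged sketch of the settings cell. The variable names batch_name, width_height, and steps are the notebook's; the snap_to_64 helper is our own illustration of the automatic rounding to multiples of 64px described above, and the exact values are just the ones used in this tutorial.

```python
# Illustrative settings-cell values. snap_to_64 is our own helper that
# mimics Disco Diffusion's automatic rounding to multiples of 64px.
def snap_to_64(size):
    """Round each dimension down to the nearest multiple of 64 pixels."""
    return [(d // 64) * 64 for d in size]


batch_name = "TimeToDisco"              # output pictures are named after this
width_height = snap_to_64([1280, 768])  # the resolution used in this tutorial
steps = 250                             # more iterations, more detail

print(width_height)  # [1280, 768] — already multiples of 64, so unchanged
```

A size like [1000, 700] would be snapped down to [960, 640], which is why picking multiples of 64 up front avoids surprises.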
Step 3: Text setting
Text setting is the most important part. 'text_prompts' holds the prompt phrases. The key 0 refers to the first frame, which lets you set a starting prompt for an animation; for a single still image, only the 0 entry is needed. 'image_prompts' is an optional input image on which the AI bases its generation, which is quite interesting to experiment with.
Crafting an appealing image through text settings is certainly an art in itself. Even so, an image can be created from just a few keywords; of course, you can add the names of artists, a time, locations, etc.
The caption of the official example says: " A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation."
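In the notebook, the prompts live in a Python dict keyed by frame number. A sketch using the official example prompt quoted above (text_prompts and image_prompts are the notebook's variable names):

```python
# Sketch of the text-setting cell. The prompt string is the official example;
# key 0 is the first frame, which is all a single still image needs.
text_prompts = {
    0: [
        "A beautiful painting of a singular lighthouse, shining its light "
        "across a tumultuous sea of blood by greg rutkowski and thomas "
        "kinkade, Trending on artstation.",
    ],
}

image_prompts = {}  # optionally seed generation with an input image instead

print(text_prompts[0][0][:40])
```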
For 2,000+ more creations: https://docs.google.com/spreadsheets/d/14xTqtuV3BuKDNhLotB_d1aFlBGnDJOY0BRXJ8-86GpA/edit#gid=0
Step 4: Diffuse!
Now comes the most exciting part - Diffuse! 'n_batches' is the number of pictures you want to generate. The default setting of 50 pictures would take a long time, so you can change the number to 1 to run a test first. When you run the cell, the picture will gradually become clearer and clearer.
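For a quick test run, that is the only value you need to change (n_batches is the notebook's variable; the time estimate below is just back-of-the-envelope arithmetic from this tutorial's ~15-minutes-per-image T4 run, not a guarantee):

```python
# Diffuse-step setting: generate one test image instead of the default 50.
n_batches = 1

# Rough runtime estimate based on this tutorial's T4 run at 1280*768.
minutes_per_image = 15
estimated_minutes = n_batches * minutes_per_image
print(f"~{estimated_minutes} min for {n_batches} image(s)")  # ~15 min for 1 image(s)
```

The same arithmetic shows why the default of 50 is impractical for a first try: roughly 12.5 hours on this setup.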
During this process, you can view the GPU usage in the terminal via nvidia-smi.
Finally, after the computation completes, you can download the generated pictures from the Storage section on Cloudam.
Cloudam is a one-stop cloud-HPC platform with 300+ pre-installed applications ready to deploy immediately. The system smartly schedules compute nodes and dynamically schedules software licenses, optimizing workflows and boosting efficiency for engineers and researchers.
Partnered with AWS, Azure, Google Cloud, Oracle Cloud, etc., Cloudam powers your R&D with massive cloud resources without queuing.
You can submit jobs with intuitive templates, SLURM, and Windows/Linux workstations. Whether you are a beginner or a professional, you will always find it handy to run and manage your jobs.
There is a $30 Free Trial for every new user. Why not register and boost your R&D NOW?