Commit b90e1f0

Add CycleGAN colab notebook

1 parent 43521b0 commit b90e1f0

File tree

1 file changed: CycleGAN.ipynb (+255 −0 lines)
@@ -0,0 +1,255 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "CycleGAN",
      "provenance": [],
      "collapsed_sections": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/bkkaggle/pytorch-CycleGAN-and-pix2pix/blob/master/CycleGAN.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5VIGyIus8Vr7",
        "colab_type": "text"
      },
      "source": [
        "Take a look at the [repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) for more information."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7wNjDKdQy35h",
        "colab_type": "text"
      },
      "source": [
        "# Install"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TRm-USlsHgEV",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Pt3igws3eiVp",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import os\n",
        "os.chdir('pytorch-CycleGAN-and-pix2pix/')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "z1EySlOXwwoa",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!pip install -r requirements.txt"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8daqlgVhw29P",
        "colab_type": "text"
      },
      "source": [
        "# Datasets\n",
        "\n",
        "Download one of the official datasets with:\n",
        "\n",
        "- `bash ./datasets/download_cyclegan_dataset.sh [apple2orange, orange2apple, summer2winter_yosemite, winter2summer_yosemite, horse2zebra, zebra2horse, monet2photo, style_monet, style_cezanne, style_ukiyoe, style_vangogh, sat2map, map2sat, cityscapes_photo2label, cityscapes_label2photo, facades_photo2label, facades_label2photo, iphone2dslr_flower]`\n",
        "\n",
        "Or use your own dataset by creating the appropriate folders and adding in the images.\n",
        "\n",
        "- Create a dataset folder under `./datasets` for your dataset.\n",
        "- Create subfolders `testA`, `testB`, `trainA`, and `trainB` under your dataset's folder. Place the images you want to transform from A to B (cat2dog) in the `testA` folder, the images you want to transform from B to A (dog2cat) in the `testB` folder, and do the same for the `trainA` and `trainB` folders. A sketch of this layout follows the download cell below."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vrdOettJxaCc",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!bash ./datasets/download_cyclegan_dataset.sh horse2zebra"
      ],
      "execution_count": 0,
      "outputs": []
    },
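    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of the custom-dataset layout described above. The dataset name `cat2dog` is a placeholder; substitute your own, then copy your images into the four folders."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {}
      },
      "source": [
        "import os\n",
        "\n",
        "# Placeholder dataset name -- replace with your own\n",
        "dataset = 'cat2dog'\n",
        "\n",
        "# trainA/trainB hold the training images for each domain,\n",
        "# testA/testB hold the images to transform at test time\n",
        "for split in ['trainA', 'trainB', 'testA', 'testB']:\n",
        "    os.makedirs(os.path.join('./datasets', dataset, split), exist_ok=True)"
      ],
      "execution_count": 0,
      "outputs": []
    },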
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gdUz4116xhpm",
        "colab_type": "text"
      },
      "source": [
        "# Pretrained models\n",
        "\n",
        "Download one of the official pretrained models with:\n",
        "\n",
        "- `bash ./scripts/download_cyclegan_model.sh [apple2orange, orange2apple, summer2winter_yosemite, winter2summer_yosemite, horse2zebra, zebra2horse, monet2photo, style_monet, style_cezanne, style_ukiyoe, style_vangogh, sat2map, map2sat, cityscapes_photo2label, cityscapes_label2photo, facades_photo2label, facades_label2photo, iphone2dslr_flower]`\n",
        "\n",
        "Or add your own pretrained model at `./checkpoints/{NAME}_pretrained/latest_net_G.pth` (a sketch cell below the download shows this)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "B75UqtKhxznS",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!bash ./scripts/download_cyclegan_model.sh horse2zebra"
      ],
      "execution_count": 0,
      "outputs": []
    },
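    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A sketch of the manual alternative, assuming you already have generator weights of your own: it creates the checkpoint folder and copies the weights to the filename the test script looks for. Both `mymodel` and the source path are placeholders."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {}
      },
      "source": [
        "import os\n",
        "\n",
        "# Placeholder model name -- replace with your own\n",
        "name = 'mymodel'\n",
        "os.makedirs(f'./checkpoints/{name}_pretrained', exist_ok=True)\n",
        "\n",
        "# Placeholder source path -- point this at your own generator weights\n",
        "!cp /path/to/your/latest_net_G.pth ./checkpoints/{name}_pretrained/latest_net_G.pth"
      ],
      "execution_count": 0,
      "outputs": []
    },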
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yFw1kDQBx3LN",
        "colab_type": "text"
      },
      "source": [
        "# Training\n",
        "\n",
        "- `python train.py --dataroot ./datasets/horse2zebra --name horse2zebra --model cycle_gan`\n",
        "\n",
        "Change `--dataroot` and `--name` to your own dataset's path and model's name. Use `--gpu_ids 0,1,..` to train on multiple GPUs and `--batch_size` to change the batch size. I've found that a batch size of 16 fits on four V100s and can finish training an epoch in ~90s.\n",
        "\n",
        "Once your model has trained, copy the latest checkpoint to the filename that the test script automatically looks for:\n",
        "\n",
        "Use `cp ./checkpoints/horse2zebra/latest_net_G_A.pth ./checkpoints/horse2zebra/latest_net_G.pth` if you want to transform images from class A to class B and `cp ./checkpoints/horse2zebra/latest_net_G_B.pth ./checkpoints/horse2zebra/latest_net_G.pth` if you want to transform images from class B to class A (the cell after the training run below performs the A-to-B copy).\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0sp7TCT2x9dB",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!python train.py --dataroot ./datasets/horse2zebra --name horse2zebra --model cycle_gan"
      ],
      "execution_count": 0,
      "outputs": []
    },
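    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Copy the latest A-to-B generator checkpoint to the filename the test script looks for, as described above; swap in `latest_net_G_B.pth` to go from B to A instead."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab": {}
      },
      "source": [
        "# Copy the A-to-B generator; use latest_net_G_B.pth instead for B-to-A\n",
        "!cp ./checkpoints/horse2zebra/latest_net_G_A.pth ./checkpoints/horse2zebra/latest_net_G.pth"
      ],
      "execution_count": 0,
      "outputs": []
    },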
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9UkcaFZiyASl",
        "colab_type": "text"
      },
      "source": [
        "# Testing\n",
        "\n",
        "- `python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout`\n",
        "\n",
        "Change `--dataroot` and `--name` to be consistent with your trained model's configuration.\n",
        "\n",
        "> From https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix:\n",
        "> The option `--model test` is used for generating results of CycleGAN only for one side. This option will automatically set `--dataset_mode single`, which only loads the images from one set. On the contrary, using `--model cycle_gan` requires loading and generating results in both directions, which is sometimes unnecessary. The results will be saved at `./results/`. Use `--results_dir {directory_path_to_save_result}` to specify the results directory.\n",
        "\n",
        "> For your own experiments, you might want to specify `--netG`, `--norm`, `--no_dropout` to match the generator architecture of the trained model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uCsKkEq0yGh0",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OzSKIPUByfiN",
        "colab_type": "text"
      },
      "source": [
        "# Visualize"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9Mgg8raPyizq",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import matplotlib.pyplot as plt\n",
        "\n",
        "# Display a generated (fake) image from the test results\n",
        "img = plt.imread('./results/horse2zebra_pretrained/test_latest/images/n02381460_1010_fake.png')\n",
        "plt.imshow(img)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0G3oVH9DyqLQ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import matplotlib.pyplot as plt\n",
        "\n",
        "# Display the corresponding real input image for comparison\n",
        "img = plt.imread('./results/horse2zebra_pretrained/test_latest/images/n02381460_1010_real.png')\n",
        "plt.imshow(img)"
      ],
      "execution_count": 0,
      "outputs": []
    }
  ]
}
