
How to fix the loss suddenly becoming 0 while training a network with PyTorch

一夜奈良山 2021-08-17 15:09:37

While training a network with PyTorch, the loss sometimes suddenly drops to 0. What should we do when that happens? This post walks through one such case and its fix.

The problem

# the loss suddenly becomes 0
python train.py -b=8
INFO: Using device cpu
INFO: Network:
        1 input channels
        7 output channels (classes)
        Bilinear upscaling
INFO: Creating dataset with 868 examples
INFO: Starting training:
        Epochs:          5
        Batch size:      8
        Learning rate:   0.001
        Training size:   782
        Validation size: 86
        Checkpoints:     True
        Device:          cpu
        Images scaling:  1
    
Epoch 1/5:  10%|██████████████▏| 80/782 [01:33<13:21,  1.14s/img, loss (batch)=0.886]
INFO: Validation cross entropy: 1.86862473487854
Epoch 1/5:  20%|███████████████████████████▊| 160/782 [03:34<11:51,  1.14s/img, loss (batch)=2.35e-7]
INFO: Validation cross entropy: 5.887489884504049e-10
Epoch 1/5:  31%|███████████████████████████████████████████▌| 240/782 [05:41<11:29,  1.27s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  41%|██████████████████████████████████████████████████████████| 320/782 [07:49<09:16,  1.20s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  51%|████████████████████████████████████████████████████████████████████████▋| 400/782 [09:55<07:31,  1.18s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  61%|███████████████████████████████████████████████████████████████████████████████████████▏| 480/782 [12:02<05:58,  1.19s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  72%|█████████████████████████████████████████████████████████████████████████████████████████████████████▋| 560/782 [14:04<04:16,  1.15s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  82%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏| 640/782 [16:11<02:49,  1.20s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  92%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋| 720/782 [18:21<01:18,  1.26s/img, loss (batch)=0]
INFO: Validation cross entropy: 0.0
Epoch 1/5:  94%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋        | 736/782 [19:17<01:12,  1.57s/img, loss (batch)=0]
Traceback (most recent call last):
  File "train.py", line 182, in <module>
    val_percent=args.val / 100)
  File "train.py", line 66, in train_net
    for batch in train_loader:
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/_utils.py", line 385, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 4.
Original Traceback (most recent call last):
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/public/home/lidd/.conda/envs/lgg2/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: Expected object of scalar type Double but got scalar type Byte for sequence element 4 in sequence argument at position #1 'tensors'

The cross-entropy loss measures the discrepancy between the network's output and the labels; its derivative determines the direction of the gradient-descent update.
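To make this concrete, here is a minimal sketch (not from the original post) showing how `nn.CrossEntropyLoss` behaves: it is well above 0 for uncertain predictions and collapses toward 0 only when the logits are overwhelmingly confident and correct — which is why a sudden, persistent 0 is suspicious so early in training.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
labels = torch.tensor([2, 0])  # ground-truth class indices for 2 samples

# Mildly confident logits -> moderate loss
mild = torch.tensor([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
# Overwhelmingly confident (and correct) logits -> loss is effectively 0
confident = torch.tensor([[0.0, 0.0, 50.0], [50.0, 0.0, 0.0]])

print(criterion(mild, labels).item())       # around 0.55
print(criterion(confident, labels).item())  # effectively 0.0
```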

When the loss suddenly becomes 0, there are two possibilities: either the predicted output is 0, or the labels are 0. If the labels were 0, however, the loss would likely already be 0 from the very start of training.

Check the parameter initialization

Check the forward pass of the network

Check how the loss is computed

Check the gradient-descent step

Check whether gradients are vanishing.
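The checks above can be scripted before committing to a long run. A minimal sketch (the toy model, shapes, and names are illustrative stand-ins, not the post's actual network): inspect the label values, the forward output, the loss, and the per-layer gradient norms.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real network and one batch (illustrative only)
model = nn.Sequential(nn.Linear(10, 7))
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 10)
y = torch.randint(0, 7, (8,))

# Labels: if every label were 0, a near-zero loss would be unsurprising
print("label values:", y.unique())

# Forward pass: output shape should match (batch, classes)
out = model(x)
print("output shape:", out.shape)

# Loss: on randomly initialized weights it should be finite and non-zero
loss = criterion(out, y)
print("initial loss:", loss.item())

# Gradients: near-zero norms on every layer suggest vanishing gradients
loss.backward()
for name, p in model.named_parameters():
    print(name, "grad norm:", p.grad.norm().item())
```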

In this case it was actually the labels that were wrong: the traceback ("Expected object of scalar type Double but got scalar type Byte for sequence element 4") shows that some masks came out of the dataset with a different dtype than the rest, so the collate step crashed — and inconsistent labels explain the bogus loss as well.
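The usual fix is to cast every image and mask to a consistent dtype inside the dataset's `__getitem__`, so `default_collate` can stack each batch into one tensor. A hedged sketch (the class and field names are illustrative, not the post's actual code):

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class SegDataset(Dataset):
    """Illustrative dataset; in real code the arrays come from image files."""

    def __init__(self, images, masks):
        self.images = images  # list of HxW numpy arrays (any float dtype)
        self.masks = masks    # list of HxW numpy arrays (e.g. uint8 labels)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        # Casting here guarantees every element has the same dtype, so
        # default_collate never mixes Byte and Double tensors in one batch.
        img = torch.from_numpy(self.images[i]).float().unsqueeze(0)  # 1xHxW
        mask = torch.from_numpy(self.masks[i]).long()                # class ids
        return {"image": img, "mask": mask}
```

With this in place, even a source folder that mixes `float64` images and `uint8` masks collates cleanly into `float32` / `int64` batches.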

Supplement: loss=nan during PyTorch training

Another tricky situation you may run into: the loss becomes nan during PyTorch training.

There are several possible causes:

1. The learning rate is too high.

2. There is a bug in the loss function.

3. For regression problems, a division by zero may have occurred; adding a small epsilon term may fix it.

4. The data itself may contain nan or inf; check the input and target with np.isnan() / np.isinf().

5. The target must be something the loss function can actually compute on — e.g. with a sigmoid activation the target should be greater than 0 — so check the dataset as well.
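Points 3 to 5 above can be sketched as small guard functions (the epsilon value and all names here are illustrative, not from the original post):

```python
import numpy as np
import torch

def check_batch(inp: np.ndarray, target: np.ndarray):
    """Point 4: look for nan/inf in the data before it reaches the loss."""
    assert not np.isnan(inp).any() and not np.isinf(inp).any(), "bad input"
    assert not np.isnan(target).any() and not np.isinf(target).any(), "bad target"

def relative_error(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8):
    """Point 3: guard divisions in a regression loss with a small epsilon,
    so a zero target produces a large-but-finite value instead of nan/inf."""
    return ((pred - target).abs() / (target.abs() + eps)).mean()

def check_bce_target(target: torch.Tensor):
    """Point 5: targets fed to a BCE loss on sigmoid outputs must lie in [0, 1]."""
    assert target.min() >= 0 and target.max() <= 1, "target outside [0, 1]"
```

Running these on each batch (or once over the whole dataset) usually pinpoints which of the listed causes is responsible before the nan ever appears.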

That covers the fixes for the loss becoming 0 while training a network with PyTorch. Hopefully it serves as a useful reference — thanks for supporting W3Cschool, and if anything here is wrong or incomplete, corrections are welcome.
