2019 Remote Sensing Image Sparse Representation and Intelligent Analysis Competition - Semantic Segmentation Track


Topic 1: Remote Sensing Image Scene Classification

Remote sensing image scene classification aims at scene-level interpretation of remote sensing imagery in a spatial information network, assigning a scene category label to each image. The task takes remote sensing images containing typical scenes as input; participating teams use the data provided by the organizer to classify the specified images, and the organizer scores the classification results according to the published evaluation criteria.

Challenge details: http://rscup.bjxintong.com.cn/#/theme/1          Scene classification: https://github.com/vicwer/sense_classification

Topic 2: Remote Sensing Image Object Detection

The remote sensing image object detection and recognition challenge uses algorithmic models to automatically determine the categories and locations of one or more objects in a remote sensing image. The task takes remote sensing images containing typical ground objects as input; participating teams perform oriented object detection and recognition on the images provided by the organizer, and the organizer scores the detection and recognition results according to the evaluation criteria.

Challenge details: http://rscup.bjxintong.com.cn/#/theme/2

Topic 3: Remote Sensing Image Semantic Segmentation

The remote sensing image semantic segmentation challenge analyzes the spectral and spatial information of the ground objects in a remote sensing image and assigns a semantic class label to each pixel. The task takes optical remote sensing images covering typical land-use classes as input; participating teams perform land-use semantic segmentation on the images provided by the organizer, and the organizer scores the results according to the evaluation criteria.

Challenge details: http://rscup.bjxintong.com.cn/#/theme/3          Semantic segmentation: https://github.com/huiyiygy/rssrai2019_semantic_segmentation

Topic 4: Remote Sensing Image Change Detection

The remote sensing image change detection challenge uses multi-temporal remote sensing data to analyze and characterize changes in land cover, assigning change class labels to the pixels that change over time in the multi-temporal images. The task takes optical remote sensing images as input; participating teams perform building change detection on the images provided by the organizer, and the organizer scores the change detection results according to the evaluation criteria.

Challenge details: http://rscup.bjxintong.com.cn/#/theme/4

Topic 5: Remote Sensing Video Object Tracking

The automatic target tracking challenge for optical remote sensing satellite video uses algorithmic models to automatically identify and locate a single target in a segment of optical satellite video, marking it with a rectangular bounding box. The task takes optical satellite video containing a typical moving target as input, delivered as a temporally continuous sequence of frames; participating teams perform automatic tracking of the moving target on the video provided by the organizer, and the organizer scores the tracking results according to the evaluation criteria.

Challenge details: http://rscup.bjxintong.com.cn/#/theme/5

Related link: https://www.sohu.com/a/322434729_772793

Competition website: http://rscup.bjxintong.com.cn/#/theme/3/

Topics 1 and 3 are the closest to what we are doing this time, especially Topic 3.

Similarity: both involve image segmentation.

Differences: the image sizes differ, and so does the number of images.
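
Since the raw remote sensing images are generally much larger than the 400x400 crop size that the training script below defaults to, a common preprocessing step is to tile each image into fixed-size patches. Below is a minimal sketch of such a step, assuming the image is already loaded as an (H, W, C) NumPy array at least 400 pixels on each side; the patch size and stride are illustrative, not taken from the competition rules.

import numpy as np


def tile_image(image, patch_size=400, stride=400):
    """Split an (H, W, C) array into patch_size x patch_size tiles.

    The last row/column of tiles is shifted back so the image border is
    always covered. Returns a list of (row, col, patch) tuples; the
    offsets can be reused later to stitch predictions back together.
    """
    height, width = image.shape[:2]
    rows = list(range(0, height - patch_size + 1, stride))
    cols = list(range(0, width - patch_size + 1, stride))
    if rows[-1] != height - patch_size:
        rows.append(height - patch_size)
    if cols[-1] != width - patch_size:
        cols.append(width - patch_size)
    return [(r, c, image[r:r + patch_size, c:c + patch_size])
            for r in rows for c in cols]

At inference time the per-patch predictions can be written back into a full-size label map at the recorded offsets, recovering a segmentation result at the original image resolution.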

# -*- coding:utf-8 -*-
"""
@function: training script for the UNet / UNetNested semantic segmentation models
@author:HuiYi or 會意
@file: vis.py
@time: 2019/6/23 7:00 PM
"""
import argparse
import os
import numpy as np
import torch
from tqdm import tqdm

from mypath import Path
from utils.saver import Saver
from utils.summaries import TensorboardSummary
from dataloaders import make_data_loader
from models.backbone.UNet import UNet
from models.backbone.UNetNested import UNetNested
from utils.calculate_weights import calculate_weigths_labels
from utils.loss import SegmentationLosses
from utils.metrics import Evaluator
# from utils.lr_scheduler import LR_Scheduler
from models.sync_batchnorm.replicate import patch_replication_callback


class Trainer(object):
    def __init__(self, args):
        self.args = args

        # Define Saver
        self.saver = Saver(args)
        self.saver.save_experiment_config()
        # Define Tensorboard Summary
        self.summary = TensorboardSummary(self.saver.experiment_dir)
        self.writer = self.summary.create_summary()

        # Define Dataloader
        kwargs = {'num_workers': args.workers, 'pin_memory': True}
        self.train_loader, self.val_loader, self.test_loader, self.nclass = make_data_loader(args, **kwargs)

        # Define network
        if self.args.backbone == 'unet':
            model = UNet(in_channels=4, n_classes=self.nclass, sync_bn=args.sync_bn)
            print("using UNet")
        elif self.args.backbone == 'unetNested':
            model = UNetNested(in_channels=4, n_classes=self.nclass, sync_bn=args.sync_bn)
            print("using UNetNested")
        else:
            raise ValueError("Unsupported backbone: {}".format(self.args.backbone))

        # train_params = [{'params': model.get_params(), 'lr': args.lr}]
        train_params = [{'params': model.get_params()}]

        # Define Optimizer
        # optimizer = torch.optim.SGD(train_params, momentum=args.momentum,
        #                             weight_decay=args.weight_decay, nesterov=args.nesterov)
        optimizer = torch.optim.Adam(train_params, self.args.learn_rate, weight_decay=args.weight_decay, amsgrad=True)

        # Define Criterion
        # whether to use class balanced weights
        if args.use_balanced_weights:
            classes_weights_path = os.path.join(Path.db_root_dir(args.dataset), args.dataset + '_classes_weights.npy')
            if os.path.isfile(classes_weights_path):
                weight = np.load(classes_weights_path)
            else:
                weight = calculate_weigths_labels(args.dataset, self.train_loader, self.nclass)
            weight = torch.from_numpy(weight.astype(np.float32))
        else:
            weight = None
        self.criterion = SegmentationLosses(weight=weight, cuda=args.cuda).build_loss(mode=args.loss_type)
        self.model, self.optimizer = model, optimizer

        # Define Evaluator
        self.evaluator = Evaluator(self.nclass)
        # Define lr scheduler
        # self.scheduler = LR_Scheduler(args.lr_scheduler, args.lr, args.epochs, len(self.train_loader))

        # Using cuda
        if args.cuda:
            self.model = torch.nn.DataParallel(self.model, device_ids=self.args.gpu_ids)
            patch_replication_callback(self.model)
            self.model = self.model.cuda()

        # Resuming checkpoint
        self.best_pred = 0.0
        if args.resume is not None:
            if not os.path.isfile(args.resume):
                raise RuntimeError("=> no checkpoint found at '{}'".format(args.resume))
            checkpoint = torch.load(args.resume)
            args.start_epoch = checkpoint['epoch']
            if args.cuda:
                self.model.module.load_state_dict(checkpoint['state_dict'])
            else:
                self.model.load_state_dict(checkpoint['state_dict'])
            if not args.ft:
                self.optimizer.load_state_dict(checkpoint['optimizer'])
            self.best_pred = checkpoint['best_pred']
            print("=> loaded checkpoint '{}' (epoch {})"
                  .format(args.resume, checkpoint['epoch']))

        # Clear start epoch if fine-tuning
        if args.ft:
            args.start_epoch = 0

    def training(self, epoch):
        print('[Epoch: %d, learning rate: %.6f, previous best = %.4f]' % (epoch, self.args.learn_rate, self.best_pred))
        train_loss = 0.0
        self.model.train()
        self.evaluator.reset()
        tbar = tqdm(self.train_loader)
        num_img_tr = len(self.train_loader)

        for i, sample in enumerate(tbar):
            image, target = sample['image'], sample['label']
            if self.args.cuda:
                image, target = image.cuda(), target.cuda()
            # self.scheduler(self.optimizer, i, epoch, self.best_pred)
            self.optimizer.zero_grad()
            output = self.model(image)
            loss = self.criterion(output, target)
            loss.backward()
            self.optimizer.step()
            train_loss += loss.item()
            tbar.set_description('Train loss: %.5f' % (train_loss / (i + 1)))
            self.writer.add_scalar('train/total_loss_iter', loss.item(), i + num_img_tr * epoch)

        # Note: only the final batch's predictions are scored here, as a quick
        # sanity check during training; per-batch evaluation happens in validation().
        pred = output.data.cpu().numpy()
        target = target.cpu().numpy()
        pred = np.argmax(pred, axis=1)
        # Add the (last) batch into the evaluator
        self.evaluator.add_batch(target, pred)

        # Fast test during the training
        Acc = self.evaluator.Pixel_Accuracy()
        Acc_class = self.evaluator.Pixel_Accuracy_Class()
        mIoU = self.evaluator.Mean_Intersection_over_Union()
        FWIoU = self.evaluator.Frequency_Weighted_Intersection_over_Union()
        self.writer.add_scalar('train/mIoU', mIoU, epoch)
        self.writer.add_scalar('train/Acc', Acc, epoch)
        self.writer.add_scalar('train/Acc_class', Acc_class, epoch)
        self.writer.add_scalar('train/fwIoU', FWIoU, epoch)
        self.writer.add_scalar('train/total_loss_epoch', train_loss, epoch)

        print('train validation:')
        print("Acc:{}, Acc_class:{}, mIoU:{}, fwIoU: {}".format(Acc, Acc_class, mIoU, FWIoU))
        print('Loss: %.3f' % train_loss)
        print('---------------------------------')

    def validation(self, epoch):
        test_loss = 0.0
        self.model.eval()
        self.evaluator.reset()
        tbar = tqdm(self.val_loader, desc='\r')
        num_img_val = len(self.val_loader)

        for i, sample in enumerate(tbar):
            image, target = sample['image'], sample['label']
            if self.args.cuda:
                image, target = image.cuda(), target.cuda()
            with torch.no_grad():
                output = self.model(image)
            loss = self.criterion(output, target)
            test_loss += loss.item()
            tbar.set_description('Test loss: %.5f' % (test_loss / (i + 1)))
            self.writer.add_scalar('val/total_loss_iter', loss.item(), i + num_img_val * epoch)
            pred = output.data.cpu().numpy()
            target = target.cpu().numpy()
            pred = np.argmax(pred, axis=1)
            # Add batch sample into evaluator
            self.evaluator.add_batch(target, pred)

        # Fast test during the training
        Acc = self.evaluator.Pixel_Accuracy()
        Acc_class = self.evaluator.Pixel_Accuracy_Class()
        mIoU = self.evaluator.Mean_Intersection_over_Union()
        FWIoU = self.evaluator.Frequency_Weighted_Intersection_over_Union()
        self.writer.add_scalar('val/total_loss_epoch', test_loss, epoch)
        self.writer.add_scalar('val/mIoU', mIoU, epoch)
        self.writer.add_scalar('val/Acc', Acc, epoch)
        self.writer.add_scalar('val/Acc_class', Acc_class, epoch)
        self.writer.add_scalar('val/fwIoU', FWIoU, epoch)
        print('test validation:')
        print("Acc:{}, Acc_class:{}, mIoU:{}, fwIoU: {}".format(Acc, Acc_class, mIoU, FWIoU))
        print('Loss: %.3f' % test_loss)
        print('====================================')

        # Use mIoU (mean over classes of TP / (TP + FP + FN)) as the model selection metric.
        new_pred = mIoU
        if new_pred > self.best_pred:
            is_best = True
            self.best_pred = new_pred
            self.saver.save_checkpoint({
                'epoch': epoch + 1,
                'state_dict': self.model.module.state_dict() if self.args.cuda else self.model.state_dict(),
                'optimizer': self.optimizer.state_dict(),
                'best_pred': self.best_pred,
            }, is_best)


def main():
    parser = argparse.ArgumentParser(description="PyTorch Unet Training")
    parser.add_argument('--backbone', type=str, default='unet',
                        choices=['unet', 'unetNested'],
                        help='backbone name (default: unet)')
    parser.add_argument('--dataset', type=str, default='rssrai2019',
                        choices=['rssrai2019'],
                        help='dataset name (default: rssrai2019)')
    parser.add_argument('--workers', type=int, default=4,
                        metavar='N', help='dataloader threads')
    parser.add_argument('--base-size', type=int, default=400,
                        help='base image size')
    parser.add_argument('--crop-size', type=int, default=400,
                        help='crop image size')
    parser.add_argument('--sync-bn', type=bool, default=None,
                        help='whether to use sync bn (default: auto)')
    parser.add_argument('--freeze-bn', type=bool, default=False,
                        help='whether to freeze bn parameters (default: False)')
    parser.add_argument('--loss-type', type=str, default='ce',
                        choices=['ce', 'focal'],
                        help='loss func type (default: ce)')
    # training hyper params
    parser.add_argument('--epochs', type=int, default=None, metavar='N',
                        help='number of epochs to train (default: auto)')
    parser.add_argument('--start_epoch', type=int, default=0, metavar='N',
                        help='start epochs (default:0)')
    parser.add_argument('--batch-size', type=int, default=None, metavar='N',
                        help='input batch size for training (default: auto)')
    parser.add_argument('--test-batch-size', type=int, default=None, metavar='N',
                        help='input batch size for testing (default: auto)')
    parser.add_argument('--use-balanced-weights', action='store_true', default=False,
                        help='whether to use balanced weights (default: False)')
    # optimizer params
    parser.add_argument('--learn-rate', type=float, default=None, metavar='LR',
                        help='learning rate (default: auto)')
    parser.add_argument('--lr-scheduler', type=str, default='poly',
                        choices=['poly', 'step', 'cos'],
                        help='lr scheduler mode: (default: poly)')
    parser.add_argument('--momentum', type=float, default=0.9,
                        metavar='M', help='momentum (default: 0.9)')
    parser.add_argument('--weight-decay', type=float, default=5e-4,
                        metavar='M', help='w-decay (default: 5e-4)')
    parser.add_argument('--nesterov', action='store_true', default=True,
                        help='whether to use nesterov (default: True)')
    # cuda, seed and logging
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--gpu-ids', type=str, default='0',
                        help='use which gpu to train, must be a comma-separated list of integers only (default=0)')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    # checking point
    parser.add_argument('--resume', type=str, default=None,
                        help='put the path to resuming file if needed')
    parser.add_argument('--checkname', type=str, default=None,
                        help='set the checkpoint name')
    # finetuning pre-trained models
    parser.add_argument('--ft', action='store_true', default=False,
                        help='finetuning on a different dataset')
    # evaluation option
    parser.add_argument('--eval-interval', type=int, default=1,
                        help='evaluation interval (default: 1)')
    parser.add_argument('--no-val', action='store_true', default=False,
                        help='skip validation during training')

    args = parser.parse_args()
    args.cuda = not args.no_cuda and torch.cuda.is_available()
    if args.cuda:
        try:
            args.gpu_ids = [int(s) for s in args.gpu_ids.split(',')]
        except ValueError:
            raise ValueError('Argument --gpu_ids must be a comma-separated list of integers only')

    if args.sync_bn is None:
        if args.cuda and len(args.gpu_ids) > 1:
            args.sync_bn = True
        else:
            args.sync_bn = False

    # default settings for epochs, batch_size and lr
    if args.epochs is None:
        epochs_map = {'rssrai2019': 100}
        args.epochs = epochs_map[args.dataset.lower()]

    if args.batch_size is None:
        args.batch_size = 4 * len(args.gpu_ids)

    if args.test_batch_size is None:
        args.test_batch_size = args.batch_size

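    # The reference learning rate below is defined for a batch of 4 per GPU and is scaled linearly with the actual batch size.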
    if args.learn_rate is None:
        lrs = {'rssrai2019': 0.01}
        args.learn_rate = lrs[args.dataset.lower()] / (4 * len(args.gpu_ids)) * args.batch_size

    if args.checkname is None:
        args.checkname = str(args.backbone)

    print(args)
    torch.manual_seed(args.seed)
    trainer = Trainer(args)
    print('Starting Epoch:', trainer.args.start_epoch)
    print('Total Epoches:', trainer.args.epochs)
    print('====================================')
    for epoch in range(trainer.args.start_epoch, trainer.args.epochs):
        trainer.training(epoch)
        if not args.no_val and epoch % args.eval_interval == (args.eval_interval - 1):
            trainer.validation(epoch)

    trainer.writer.close()


if __name__ == "__main__":
    main()
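
For reference, one way to launch this training script from the command line (the flag names come from the argparse definitions above; the dataset root is assumed to be configured in mypath.Path, and the concrete values are only illustrative):

python vis.py --backbone unetNested --dataset rssrai2019 --batch-size 4 --epochs 100 --gpu-ids 0 --use-balanced-weights

The TensorBoard writer is created with the Saver's experiment directory, so the training and validation curves logged above can be inspected with TensorBoard, and the best checkpoint (selected by validation mIoU) is saved through the same Saver.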

 

