Published in

IOP Publishing, IOP Conference Series: Materials Science and Engineering, 561(1), p. 012107, 2019

DOI: 10.1088/1757-899x/561/1/012107

Institute of Electrical and Electronics Engineers, IEEE Transactions on Medical Imaging, 36(4), pp. 994-1004, 2017

DOI: 10.1109/tmi.2016.2642839

Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks

This paper is made freely available by the publisher.


Preprint: archiving forbidden
Postprint: archiving forbidden
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Automated melanoma recognition from dermoscopy images is a difficult task for deep learning because of the low contrast and high variation of melanoma lesions in skin. Dermoscopy is a non-invasive technique, so it cannot probe the skin beyond its surface appearance. To overcome these limitations, this work proposes a method based on very deep convolutional neural networks (CNNs). For more accurate classification, the method combines a fully convolutional residual network (FCRN) with a deep residual CNN, trained effectively on limited data. First, the lesion is segmented from the input image by the residual segmentation network; the segmented region is then classified by the deep residual network to detect abnormalities in the skin. Because classification operates on the segmented portion alone, the network extracts features specific to the lesion. The proposed technique is evaluated on dermoscopy datasets, and experimental results report performance in terms of histogram analysis and PSNR.
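The two-stage pipeline described in the abstract, lesion segmentation with a fully convolutional residual network (FCRN) followed by classification of the segmented region with a deep residual CNN, can be sketched roughly as follows. This is only an illustrative outline in PyTorch: the torchvision models (fcn_resnet50, resnet50), the two-class setup, and the masking step are assumptions standing in for the networks and training actually used in the paper.

import torch
from torchvision.models import resnet50
from torchvision.models.segmentation import fcn_resnet50

# Stage 1: lesion segmentation with a residual fully convolutional network.
# Stage 2: melanoma vs. benign classification with a deep residual CNN.
# Both models are illustrative stand-ins, not the paper's trained networks.
seg_net = fcn_resnet50(num_classes=2).eval()
cls_net = resnet50(num_classes=2).eval()

def classify_lesion(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) dermoscopy image tensor, values in [0, 1]."""
    with torch.no_grad():
        # Per-pixel segmentation logits; channel 1 is treated as "lesion".
        logits = seg_net(image)["out"]              # (1, 2, H, W)
        mask = logits.argmax(dim=1, keepdim=True)   # (1, 1, H, W)

        # Restrict the classifier's input to the segmented lesion so that
        # features are extracted from the segmented portion alone.
        masked = image * mask.float()
        return torch.softmax(cls_net(masked), dim=1)  # (1, 2) class probabilities

# Example call with a random placeholder image in place of a dermoscopy photo.
probs = classify_lesion(torch.rand(1, 3, 224, 224))
print(probs)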