Title: Convolutional encoder-decoder network using transfer learning for topology optimization
Authors: Ateş, Görkem Can
Görgülüarslan, Recep M.
Keywords: Deep learning
Transfer learning
VGG network
Topology optimization
Semantic segmentation
Issue Date: 2023
Publisher: Springer London Ltd
Abstract: State-of-the-art deep neural networks have achieved great success as an alternative to topology optimization by eliminating the iterative framework of the optimization process. However, models with strong predictive capabilities require massive data, which can be time-consuming to generate, particularly for high-resolution structures. Transfer learning from pre-trained networks has shown promise in enhancing network performance on new tasks with a smaller amount of data. In this study, a U-net-based deep convolutional encoder-decoder network was developed for predicting high-resolution (256 x 256) optimized structures using transfer learning and fine-tuning for topology optimization. Initially, the VGG16 network pre-trained on ImageNet was employed as the encoder for transfer learning. Subsequently, the decoder was constructed from scratch, and the network was trained in two steps. Finally, the results of models employing transfer learning and those trained entirely from scratch were compared across several core parameters, including different initial input iterations, fine-tuning epoch numbers, and dataset sizes. Our findings demonstrate that using transfer learning from the ImageNet pre-trained VGG16 network as the encoder can improve final prediction performance and alleviate structural discontinuity issues in some cases while reducing training time.
ISSN: 0941-0643
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.