Federated learning-driven collaborative recommendation system for multi-modal art analysis and enhanced recommendations

With the rapid development of artificial intelligence technology, recommendation systems have been widely applied in various fields. However, in the art field, art similarity search and recommendation systems face unique challenges, namely data privacy and copyright protection issues. To address these problems, this article proposes a cross-institutional artwork similarity search and recommendation system (the AI-based Collaborative Recommendation System (AICRS) framework) that combines multimodal data fusion and federated learning. The system uses pre-trained convolutional neural network (CNN) and Bidirectional Encoder Representations from Transformers (BERT) models to extract features from image and text data. It then uses a federated learning framework to train models locally at each participating institution and aggregate parameters to optimize the global model. Experimental results show that the AICRS framework achieves a final accuracy of 92.02% on the SemArt dataset, compared to 81.52% and 83.44% for traditional CNN and Long Short-Term Memory (LSTM) models, respectively. The final loss value of the AICRS framework is 0.1284, lower than the 0.248 and 0.188 of the CNN and LSTM models. The research results of this article not only provide an effective technical solution but also offer strong support for the recommendation and protection of artworks in practice. © 2024 Gong et al.
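The abstract's federated step (train locally at each institution, then aggregate parameters into a global model) can be sketched as one federated-averaging (FedAvg) round. This is an illustrative sketch only, not the AICRS implementation: a linear model and synthetic data stand in for the paper's CNN/BERT encoders, and all names (`local_update`, `fedavg`, `institutions`) are invented for the example.

```python
# Minimal FedAvg sketch: each "institution" holds private (features, labels)
# data; the server never sees raw data, only averaged parameters.
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One local gradient-descent step for a linear model (illustrative)."""
    w, b = weights
    err = features @ w + b - labels
    grad_w = features.T @ err / len(labels)
    grad_b = err.mean()
    return [w - lr * grad_w, b - lr * grad_b]

def fedavg(global_weights, institutions, n_rounds=5):
    """Each round: every institution trains on its own data locally,
    then the server averages the resulting parameters elementwise."""
    for _ in range(n_rounds):
        local_models = [local_update([p.copy() for p in global_weights], X, y)
                        for X, y in institutions]
        global_weights = [np.mean([m[i] for m in local_models], axis=0)
                          for i in range(len(global_weights))]
    return global_weights
```

Real deployments (and the paper's setting) would add secure aggregation and weight updates by local sample counts; plain averaging keeps the privacy-preserving structure visible in a few lines.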

Bibliographic Details
Published in: PeerJ Computer Science, Volume 10
Main Authors: Gong B.; Mahsan I.P.; Xiao J.
Format: Article
Language: English
Published: PeerJ Inc., 2024
DOI: 10.7717/peerj-cs.2405
ISSN: 2376-5992
Access: All Open Access; Gold Open Access
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85210759435&doi=10.7717%2fpeerj-cs.2405&partnerID=40&md5=7418e8fcf0f994d59a2204bd6441ef4e