
MARC record from Internet Archive

LEADER: 08914cam 2200901Mi 4500
001 on1004987636
003 OCoLC
005 20201001021953.0
008 170831s2017 sz a ob 001 0 eng d
006 m o d
007 cr nn||mamaa||
040 $aAZU$beng$erda$epn$cAZU$dUPM$dYDX$dN$T$dEBLCP$dDKU$dGW5XE$dUAB$dOCLCF$dMERER$dGZM$dIOG$dOCLCO$dCOO$dOCL$dOCLCQ$dOH1$dOCL$dU3W$dCAUOI$dVT2$dAU@$dWYU$dOCLCQ$dUKMGB$dUKAHL$dOCLCQ$dERF$dOCLCQ$dSRU
015 $aGBB8N8026$2bnb
016 7 $a019166403$2Uk
019 $a1003116902$a1003195261$a1004846260$a1012473171$a1036280879$a1036297399$a1058293451$a1066463880$a1066581567$a1087427986$a1100875372
020 $a9783319618074
020 $a3319618075
020 $z3319618067
020 $z9783319618067
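The 020 fields above carry the electronic edition's ISBNs in `$a` (9783319618074, 3319618075) and the print edition's in `$z`. An ISBN-13 such as the first one can be verified with the standard check-digit rule (alternating weights 1 and 3 over the first twelve digits); a minimal sketch:

```python
def isbn13_is_valid(isbn: str) -> bool:
    """Validate an ISBN-13 check digit (weights alternate 1, 3)."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

# The electronic edition's ISBN-13 from field 020 above:
print(isbn13_is_valid("9783319618074"))  # True
```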
024 7 $a10.1007/978-3-319-61807-4$2doi
035 $a(OCoLC)1004987636$z(OCoLC)1003116902$z(OCoLC)1003195261$z(OCoLC)1004846260$z(OCoLC)1012473171$z(OCoLC)1036280879$z(OCoLC)1036297399$z(OCoLC)1058293451$z(OCoLC)1066463880$z(OCoLC)1066581567$z(OCoLC)1087427986$z(OCoLC)1100875372
037 $acom.springer.onix.9783319618074$bSpringer Nature
050 4 $aQA76.9.D343
072 7 $aCOM$x000000$2bisacsh
072 7 $aPSAN$2bicssc
082 04 $a612.8$223
100 1 $aShah, Rajiv,$eauthor.
245 10 $aMultimodal Analysis of User-Generated Multimedia Content /$cby Rajiv Shah, Roger Zimmermann.
264 1 $aCham :$bSpringer International Publishing :$bImprint :$bSpringer,$c2017.
300 $a1 online resource (xxii, 263 pages, 63 illustrations, 42 illustrations in color)
336 $atext$btxt$2rdacontent
337 $acomputer$bc$2rdamedia
338 $aonline resource$bcr$2rdacarrier
347 $atext file$bPDF$2rda
490 1 $aSocio-Affective Computing,$x2509-5706 ;$v6
520 $aThis book presents a study of semantics and sentics understanding derived from user-generated multimodal content (UGC). It enables researchers to learn about the ways multimodal analysis of UGC can augment semantics and sentics understanding and it helps in addressing several multimedia analytics problems from social media such as event detection and summarization, tag recommendation and ranking, soundtrack recommendation, lecture video segmentation, and news video uploading. Readers will discover how the derived knowledge structures from multimodal information are beneficial for efficient multimedia search, retrieval, and recommendation. However, real-world UGC is complex, and extracting the semantics and sentics from only multimedia content is very difficult because suitable concepts may be exhibited in different representations. Moreover, due to the increasing popularity of social media websites and advancements in technology, it is now possible to collect a significant amount of important contextual information (e.g., spatial, temporal, and preferential information). Thus, there is a need to analyze the information of UGC from multiple modalities to address these problems. A discussion of multimodal analysis is presented followed by studies on how multimodal information is exploited to address problems that have a significant impact on different areas of society (e.g., entertainment, education, and journalism). Specifically, the methods presented exploit the multimedia content (e.g., visual content) and associated contextual information (e.g., geo-, temporal, and other sensory data). The reader is introduced to several knowledge bases and fusion techniques to address these problems. This work includes future directions for several interesting multimedia analytics problems that have the potential to significantly impact society. The work is aimed at researchers in the multimedia field who would like to pursue research in the area of multimodal analysis of UGC.
504 $aIncludes bibliographical references and index.
505 0 $aDedication; Foreword; Preface; Acknowledgements; Contents; About the Authors; Abbreviations; Chapter 1: Introduction; 1.1 Background and Motivation; 1.2 Overview; 1.2.1 Event Understanding; 1.2.2 Tag Recommendation and Ranking; 1.2.3 Soundtrack Recommendation for UGVs; 1.2.4 Automatic Lecture Video Segmentation; 1.2.5 Adaptive News Video Uploading; 1.3 Contributions; 1.3.1 Event Understanding; 1.3.2 Tag Recommendation and Ranking; 1.3.3 Soundtrack Recommendation for UGVs; 1.3.4 Automatic Lecture Video Segmentation; 1.3.5 Adaptive News Video Uploading; 1.4 Knowledge Bases and APIs.
505 8 $a1.4.1 FourSquare; 1.4.2 Semantics Parser; 1.4.3 SenticNet; 1.4.4 WordNet; 1.4.5 Stanford POS Tagger; 1.4.6 Wikipedia; 1.5 Roadmap; References; Chapter 2: Literature Review; 2.1 Event Understanding; 2.2 Tag Recommendation and Ranking; 2.3 Soundtrack Recommendation for UGVs; 2.4 Lecture Video Segmentation; 2.5 Adaptive News Video Uploading; References; Chapter 3: Event Understanding; 3.1 Introduction; 3.2 System Overview; 3.2.1 EventBuilder; 3.2.2 EventSensor; 3.3 Evaluation; 3.3.1 EventBuilder; 3.3.2 EventSensor; 3.4 Summary; References; Chapter 4: Tag Recommendation and Ranking.
505 8 $a4.1 Introduction; 4.1.1 Tag Recommendation; 4.1.2 Tag Ranking; 4.2 System Overview; 4.2.1 Tag Recommendation; 4.2.2 Tag Ranking; 4.3 Evaluation; 4.3.1 Tag Recommendation; 4.3.2 Tag Ranking; 4.4 Summary; References; Chapter 5: Soundtrack Recommendation for UGVs; 5.1 Introduction; 5.2 Music Video Generation; 5.2.1 Scene Moods Prediction Models; 5.2.1.1 Geo and Visual Features; 5.2.1.2 Scene Moods Classification Model; 5.2.1.3 Scene Moods Recognition; 5.2.2 Music Retrieval Techniques; 5.2.2.1 Heuristic Method for Soundtrack Retrieval; 5.2.2.2 Post-Filtering with User Preferences.
505 8 $a5.2.3 Automatic Music Video Generation Model; 5.3 Evaluation; 5.3.1 Dataset and Experimental Settings; 5.3.1.1 Emotion Tag Space; 5.3.1.2 GeoVid Dataset; 5.3.1.3 Soundtrack Dataset; 5.3.1.4 Evaluation Dataset; 5.3.2 Experimental Results; 5.3.2.1 Scene Moods Prediction Accuracy; 5.3.2.2 Soundtrack Selection Accuracy; 5.3.3 User Study; 5.4 Summary; References; Chapter 6: Lecture Video Segmentation; 6.1 Introduction; 6.2 Lecture Video Segmentation; 6.2.1 Prediction of Video Transition Cues Using Supervised Learning; 6.2.2 Computation of Text Transition Cues Using N-Gram Based Language Model.
505 8 $a6.2.2.1 Preparation; 6.2.2.2 Title/Sub-Title Text Extraction; 6.2.2.3 Transition Time Recommendation from SRT File; 6.2.3 Computation of SRT Segment Boundaries Using a Linguistic-Based Approach; 6.2.4 Computation of Wikipedia Segment Boundaries; 6.2.5 Transition File Generation; 6.3 Evaluation; 6.3.1 Dataset and Experimental Settings; 6.3.2 Results from the ATLAS System; 6.3.3 Results from the TRACE System; 6.4 Summary; References; Chapter 7: Adaptive News Video Uploading; 7.1 Introduction; 7.2 Adaptive News Video Uploading; 7.2.1 NEWSMAN Scheduling Algorithm; 7.2.2 Rate-Distortion (R-D) Model.
650 0 $aMultimedia data mining.
650 0 $aUser-generated content.
650 0 $aSocial media.
650 0 $aMultimedia systems.
650 0 $aData mining.
650 7 $aData mining.$2bicssc
650 7 $aSemantics, discourse analysis, etc.$2bicssc
650 7 $aCognition & cognitive psychology.$2bicssc
650 7 $aNeurosciences.$2bicssc
650 7 $aCOMPUTERS$xGeneral.$2bisacsh
650 7 $aUser-generated content.$2fast$0(OCoLC)fst01743487
650 7 $aSocial media.$2fast$0(OCoLC)fst01741098
650 7 $aMultimedia systems.$2fast$0(OCoLC)fst01028920
650 7 $aData mining.$2fast$0(OCoLC)fst00887946
650 7 $aMultimedia data mining.$2fast$0(OCoLC)fst01982691
655 0 $aElectronic books.
655 4 $aElectronic books.
700 1 $aZimmermann, Roger,$eauthor.
776 08 $iPrinted edition:$z9783319618067
830 0 $aSocio-affective computing ;$v6.$x2509-5706
856 40 $3EBSCOhost$uhttps://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=1588701
856 40 $3ProQuest Ebook Central$uhttps://public.ebookcentral.proquest.com/choice/publicfullrecord.aspx?p=5014247
856 40 $3SpringerLink$uhttps://doi.org/10.1007/978-3-319-61807-4
856 40 $3SpringerLink$uhttps://link.springer.com/book/10.1007/978-3-319-61807-4
856 40 $3SpringerLink$uhttps://link.springer.com/book/10.1007/978-3-319-61806-7
856 40 $3VLeBooks$uhttp://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9783319618074
856 40 $uhttp://VH7QX3XE2P.search.serialssolutions.com/?V=1.0&L=VH7QX3XE2P&S=JCs&C=TC0001861377&T=marc&tab=BOOKS
938 $aAskews and Holts Library Services$bASKH$nAH33856701
938 $aProQuest Ebook Central$bEBLB$nEBL5014247
938 $aEBSCOhost$bEBSC$n1588701
938 $aYBP Library Services$bYANK$n14777775
029 1 $aAU@$b000060719170
029 1 $aGBVCP$b89748598X
029 1 $aUKMGB$b019166403
994 $aZ0$bP4A
948 $hNO HOLDINGS IN P4A - 226 OTHER HOLDINGS
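Fields in this record use the common mnemonic convention in which `$a`, `$b`, `$e`, etc. mark subfield boundaries. A minimal sketch of splitting one such field body into (code, value) pairs, using field 100 (the main author entry) from the record above as input; this illustrative helper handles only the `$`-delimited notation shown here, not full MARC parsing:

```python
import re

def parse_subfields(field_body: str) -> list[tuple[str, str]]:
    """Split a mnemonic MARC field body like '$aShah, Rajiv,$eauthor.'
    into (subfield code, value) pairs."""
    return [(m.group(1), m.group(2))
            for m in re.finditer(r"\$(\w)([^$]*)", field_body)]

# Field 100 (main author entry) from the record above:
pairs = parse_subfields("$aShah, Rajiv,$eauthor.")
print(pairs)  # [('a', 'Shah, Rajiv,'), ('e', 'author.')]
```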