<?xml version="1.0" encoding="UTF-8" ?>
<oai_dc:dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
<dc:creator>Muhammad Habib, Mahmood</dc:creator>
<dc:creator>Diez, Yago</dc:creator>
<dc:creator>Salvi, Joaquim</dc:creator>
<dc:creator>Lladó Bardera, Xavier</dc:creator>
<dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
<dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
<dc:subject>Imatges -- Processament</dc:subject>
<dc:subject>Image processing</dc:subject>
<dc:subject>Imatges -- Segmentació</dc:subject>
<dc:subject>Image segmentation</dc:subject>
<dc:subject>Visió per ordinador</dc:subject>
<dc:subject>Computer vision</dc:subject>
<dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
<dc:description>This work is supported by the FP7-ICT-2011-7 project PANDORA (Ref 288273), funded by the European Commission, and two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government: RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</dc:description>
<dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
<dc:date>2017-01</dc:date>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
<dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
<dc:relation>info:eu-repo/semantics/altIdentifier/doi/10.1016/j.patcog.2016.07.008</dc:relation>
<dc:relation>info:eu-repo/semantics/altIdentifier/issn/0031-3203</dc:relation>
<dc:language>eng</dc:language>
<dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
<dc:source>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:source>
<dc:source>Articles publicats (D-ATC)</dc:source>
<dc:rights>Tots els drets reservats</dc:rights>
<dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
<dc:format>application/pdf</dc:format>
<dc:publisher>Elsevier</dc:publisher>
</oai_dc:dc>
<?xml version="1.0" encoding="UTF-8" ?>
<d:DIDL schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
<d:DIDLInfo>
<dcterms:created schemaLocation="http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/dcterms.xsd">2016-11-17T07:23:07Z</dcterms:created>
</d:DIDLInfo>
<d:Item id="hdl_10256_13152">
<d:Descriptor>
<d:Statement mimeType="application/xml; charset=utf-8">
<dii:Identifier schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:10256/13152</dii:Identifier>
</d:Statement>
</d:Descriptor>
<d:Descriptor>
<d:Statement mimeType="application/xml; charset=utf-8">
<oai_dc:dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
<dc:creator>Muhammad Habib, Mahmood</dc:creator>
<dc:creator>Diez, Yago</dc:creator>
<dc:creator>Salvi, Joaquim</dc:creator>
<dc:creator>Lladó Bardera, Xavier</dc:creator>
<dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
<dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
<dc:subject>Imatges -- Processament</dc:subject>
<dc:subject>Image processing</dc:subject>
<dc:subject>Imatges -- Segmentació</dc:subject>
<dc:subject>Image segmentation</dc:subject>
<dc:subject>Visió per ordinador</dc:subject>
<dc:subject>Computer vision</dc:subject>
<dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
<dc:date>2016-11-17T07:23:07Z</dc:date>
<dc:date>2016-11-17T07:23:07Z</dc:date>
<dc:date>2017-01</dc:date>
<dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:identifier>0031-3203</dc:identifier>
<dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
<dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
<dc:identifier>025389</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:relation>
<dc:relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:relation>
<dc:relation>Articles publicats (D-ATC)</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
<dc:relation>FP7</dc:relation>
<dc:relation>PANDORA</dc:relation>
<dc:relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dc:relation>
<dc:relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
<dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
<dc:rights>Tots els drets reservats</dc:rights>
<dc:publisher>Elsevier</dc:publisher>
</oai_dc:dc>
</d:Statement>
</d:Descriptor>
<d:Component id="10256_13152_1">
</d:Component>
</d:Item>
</d:DIDL>
<?xml version="1.0" encoding="UTF-8" ?>
<dim:dim schemaLocation="http://www.dspace.org/xmlns/dspace/dim http://www.dspace.org/schema/dim.xsd">
<dim:field element="contributor" mdschema="dc" qualifier="author">Muhammad Habib, Mahmood</dim:field>
<dim:field element="contributor" mdschema="dc" qualifier="author">Diez, Yago</dim:field>
<dim:field element="contributor" mdschema="dc" qualifier="author">Salvi, Joaquim</dim:field>
<dim:field element="contributor" mdschema="dc" qualifier="author">Lladó Bardera, Xavier</dim:field>
<dim:field element="contributor" mdschema="dc" qualifier="funder">Ministerio de Ciencia e Innovación (Espanya)</dim:field>
<dim:field element="contributor" mdschema="dc" qualifier="funder">Ministerio de Economía y Competitividad (Espanya)</dim:field>
<dim:field element="date" mdschema="dc" qualifier="accessioned">2016-11-17T07:23:07Z</dim:field>
<dim:field element="date" mdschema="dc" qualifier="available">2016-11-17T07:23:07Z</dim:field>
<dim:field element="date" mdschema="dc" qualifier="issued">2017-01</dim:field>
<dim:field element="date" mdschema="dc" qualifier="embargoEndDate">info:eu-repo/date/embargoEnd/2026-01-01</dim:field>
<dim:field element="identifier" mdschema="dc" qualifier="issn">0031-3203</dim:field>
<dim:field element="identifier" mdschema="dc" qualifier="uri">http://hdl.handle.net/10256/13152</dim:field>
<dim:field element="identifier" mdschema="dc" qualifier="doi">http://dx.doi.org/10.1016/j.patcog.2016.07.008</dim:field>
<dim:field element="identifier" mdschema="dc" qualifier="idgrec">025389</dim:field>
<dim:field element="description" mdschema="dc" qualifier="abstract">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dim:field>
<dim:field element="description" mdschema="dc" qualifier="provenance">Submitted by Claudia Plana (claudia.plana@udg.edu) on 2016-11-17T07:23:07Z No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5)</dim:field>
<dim:field element="description" mdschema="dc" qualifier="provenance">Made available in DSpace on 2016-11-17T07:23:07Z (GMT). No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5) Previous issue date: 2017-01</dim:field>
<dim:field element="description" mdschema="dc" qualifier="sponsorship">This work is supported by the FP7-ICT-2011-7 project PANDORA (Ref 288273), funded by the European Commission, and two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government: RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</dim:field>
<dim:field element="format" mdschema="dc" qualifier="mimetype">application/pdf</dim:field>
<dim:field element="language" mdschema="dc" qualifier="iso">eng</dim:field>
<dim:field element="publisher" mdschema="dc">Elsevier</dim:field>
<dim:field element="relation" mdschema="dc">info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dim:field>
<dim:field element="relation" mdschema="dc">info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="isformatof">Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="ispartof">© Pattern Recognition, 2017, vol. 61, p. 1-14</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="ispartofseries">Articles publicats (D-ATC)</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="projectID">info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="FundingProgramme">FP7</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="ProjectAcronym">PANDORA</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="ProjectAcronym">HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dim:field>
<dim:field element="relation" mdschema="dc" qualifier="ProjectAcronym">ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dim:field>
<dim:field element="rights" mdschema="dc">Tots els drets reservats</dim:field>
<dim:field element="rights" mdschema="dc" qualifier="accessRights">info:eu-repo/semantics/embargoedAccess</dim:field>
<dim:field element="subject" mdschema="dc">Imatges -- Processament</dim:field>
<dim:field element="subject" mdschema="dc">Image processing</dim:field>
<dim:field element="subject" mdschema="dc">Imatges -- Segmentació</dim:field>
<dim:field element="subject" mdschema="dc">Image segmentation</dim:field>
<dim:field element="subject" mdschema="dc">Visió per ordinador</dim:field>
<dim:field element="subject" mdschema="dc">Computer vision</dim:field>
<dim:field element="title" mdschema="dc">A collection of challenging motion segmentation benchmark datasets</dim:field>
<dim:field element="type" mdschema="dc">info:eu-repo/semantics/article</dim:field>
<dim:field element="type" mdschema="dc" qualifier="version">info:eu-repo/semantics/publishedVersion</dim:field>
<dim:field element="embargo" mdschema="dc" qualifier="terms">Cap</dim:field>
</dim:dim>
<?xml version="1.0" encoding="UTF-8" ?>
<rdf:RDF schemaLocation="http://www.w3.org/1999/02/22-rdf-syntax-ns# http://www.europeana.eu/schemas/edm/EDM.xsd">
<edm:ProvidedCHO about="https://catalonica.bnc.cat/catalonicahub/lod/oai:dugi-doc.udg.edu:10256_--_13152#ent0">
<dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
<dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
<dc:creator>Muhammad Habib, Mahmood</dc:creator>
<dc:creator>Diez, Yago</dc:creator>
<dc:creator>Salvi, Joaquim</dc:creator>
<dc:creator>Lladó Bardera, Xavier</dc:creator>
<dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
<dc:date>2017-01</dc:date>
<dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
<dc:description>This work is supported by the FP7-ICT-2011-7 project PANDORA (Ref 288273), funded by the European Commission, and two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government: RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</dc:description>
<dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
<dc:language>eng</dc:language>
<dc:publisher>Elsevier</dc:publisher>
<dc:relation>info:eu-repo/semantics/altIdentifier/doi/10.1016/j.patcog.2016.07.008</dc:relation>
<dc:relation>info:eu-repo/semantics/altIdentifier/issn/0031-3203</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
<dc:rights>Tots els drets reservats</dc:rights>
<dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
<dc:source>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:source>
<dc:source>Articles publicats (D-ATC)</dc:source>
<dc:subject>Imatges -- Processament</dc:subject>
<dc:subject>Image processing</dc:subject>
<dc:subject>Imatges -- Segmentació</dc:subject>
<dc:subject>Image segmentation</dc:subject>
<dc:subject>Visió per ordinador</dc:subject>
<dc:subject>Computer vision</dc:subject>
<dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
<edm:type>TEXT</edm:type>
</edm:ProvidedCHO>
<ore:Aggregation about="https://catalonica.bnc.cat/catalonicahub/lod/oai:dugi-doc.udg.edu:10256_--_13152#ent1">
<edm:dataProvider>DUGiDocs. Recerca</edm:dataProvider>
<edm:provider>Catalònica</edm:provider>
</ore:Aggregation>
</rdf:RDF>
<?xml version="1.0" encoding="UTF-8" ?>
<thesis schemaLocation="http://www.ndltd.org/standards/metadata/etdms/1.0/ http://www.ndltd.org/standards/metadata/etdms/1.0/etdms.xsd">
<title>A collection of challenging motion segmentation benchmark datasets</title>
<creator>Muhammad Habib, Mahmood</creator>
<creator>Diez, Yago</creator>
<creator>Salvi, Joaquim</creator>
<creator>Lladó Bardera, Xavier</creator>
<contributor>Ministerio de Ciencia e Innovación (Espanya)</contributor>
<contributor>Ministerio de Economía y Competitividad (Espanya)</contributor>
<subject>Imatges -- Processament</subject>
<subject>Image processing</subject>
<subject>Imatges -- Segmentació</subject>
<subject>Image segmentation</subject>
<subject>Visió per ordinador</subject>
<subject>Computer vision</subject>
<description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</description>
<date>2016-11-17</date>
<date>2016-11-17</date>
<date>2017-01</date>
<date>info:eu-repo/date/embargoEnd/2026-01-01</date>
<type>info:eu-repo/semantics/article</type>
<identifier>0031-3203</identifier>
<identifier>http://hdl.handle.net/10256/13152</identifier>
<identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</identifier>
<identifier>025389</identifier>
<language>eng</language>
<relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</relation>
<relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</relation>
<relation>Articles publicats (D-ATC)</relation>
<relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</relation>
<relation>FP7</relation>
<relation>PANDORA</relation>
<relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</relation>
<relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</relation>
<relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</relation>
<relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</relation>
<rights>info:eu-repo/semantics/embargoedAccess</rights>
<rights>Tots els drets reservats</rights>
<publisher>Elsevier</publisher>
</thesis>
<?xml version="1.0" encoding="UTF-8" ?>
<record schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
<leader>00925njm 22002777a 4500</leader>
<datafield ind1=" " ind2=" " tag="042">
<subfield code="a">dc</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="720">
<subfield code="a">Muhammad Habib, Mahmood</subfield>
<subfield code="e">author</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="720">
<subfield code="a">Diez, Yago</subfield>
<subfield code="e">author</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="720">
<subfield code="a">Salvi, Joaquim</subfield>
<subfield code="e">author</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="720">
<subfield code="a">Lladó Bardera, Xavier</subfield>
<subfield code="e">author</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="260">
<subfield code="c">2017-01</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="520">
<subfield code="a">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</subfield>
</datafield>
<datafield ind1="8" ind2=" " tag="024">
<subfield code="a">0031-3203</subfield>
</datafield>
<datafield ind1="8" ind2=" " tag="024">
<subfield code="a">http://hdl.handle.net/10256/13152</subfield>
</datafield>
<datafield ind1="8" ind2=" " tag="024">
<subfield code="a">http://dx.doi.org/10.1016/j.patcog.2016.07.008</subfield>
</datafield>
<datafield ind1="8" ind2=" " tag="024">
<subfield code="a">025389</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Imatges -- Processament</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Image processing</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Imatges -- Segmentació</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Image segmentation</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Visió per ordinador</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Computer vision</subfield>
</datafield>
<datafield ind1="0" ind2="0" tag="245">
<subfield code="a">A collection of challenging motion segmentation benchmark datasets</subfield>
</datafield>
</record>
<?xml version="1.0" encoding="UTF-8" ?>
<mets ID="DSpace_ITEM_10256-13152" OBJID="hdl:10256/13152" PROFILE="DSpace METS SIP Profile 1.0" TYPE="DSpace ITEM" schemaLocation="http://www.loc.gov/METS/ http://www.loc.gov/standards/mets/mets.xsd">
<metsHdr CREATEDATE="2024-10-26T00:41:00Z">
<agent ROLE="CUSTODIAN" TYPE="ORGANIZATION">
<name>DUGiDocs</name>
</agent>
</metsHdr>
<dmdSec ID="DMD_10256_13152">
<mdWrap MDTYPE="MODS">
<xmlData schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
<mods:mods schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
<mods:name>
<mods:role>
<mods:roleTerm type="text">author</mods:roleTerm>
</mods:role>
<mods:namePart>Muhammad Habib, Mahmood</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">author</mods:roleTerm>
</mods:role>
<mods:namePart>Diez, Yago</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">author</mods:roleTerm>
</mods:role>
<mods:namePart>Salvi, Joaquim</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">author</mods:roleTerm>
</mods:role>
<mods:namePart>Lladó Bardera, Xavier</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">funder</mods:roleTerm>
</mods:role>
<mods:namePart>Ministerio de Ciencia e Innovación (Espanya)</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">funder</mods:roleTerm>
</mods:role>
<mods:namePart>Ministerio de Economía y Competitividad (Espanya)</mods:namePart>
</mods:name>
<mods:extension>
<mods:dateAccessioned encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAccessioned>
</mods:extension>
<mods:extension>
<mods:dateAvailable encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAvailable>
</mods:extension>
<mods:originInfo>
<mods:dateIssued encoding="iso8601">2017-01</mods:dateIssued>
</mods:originInfo>
<mods:identifier type="issn">0031-3203</mods:identifier>
<mods:identifier type="uri">http://hdl.handle.net/10256/13152</mods:identifier>
<mods:identifier type="doi">http://dx.doi.org/10.1016/j.patcog.2016.07.008</mods:identifier>
<mods:identifier type="idgrec">025389</mods:identifier>
<mods:abstract>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</mods:abstract>
<mods:language>
<mods:languageTerm authority="rfc3066">eng</mods:languageTerm>
</mods:language>
<mods:accessCondition type="useAndReproduction">Tots els drets reservats</mods:accessCondition>
<mods:subject>
<mods:topic>Imatges -- Processament</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Image processing</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Imatges -- Segmentació</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Image segmentation</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Visió per ordinador</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Computer vision</mods:topic>
</mods:subject>
<mods:titleInfo>
<mods:title>A collection of challenging motion segmentation benchmark datasets</mods:title>
</mods:titleInfo>
<mods:genre>info:eu-repo/semantics/article</mods:genre>
</mods:mods>
</xmlData>
</mdWrap>
</dmdSec>
<amdSec ID="TMD_10256_13152">
<rightsMD ID="RIG_10256_13152">
<mdWrap MDTYPE="OTHER" MIMETYPE="text/plain" OTHERMDTYPE="DSpaceDepositLicense">
<binData>Q29uZGljaW9ucyBkZWwgZGlww7JzaXQKCgpQZXIgcG9kZXIgcHVibGljYXIgZWwgZG9jdW1lbnQgYWwgRFVHaSBlbnMgY2FsIHVuYSBhdXRvcml0emFjacOzIHZvc3RyYSBwZXIgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVsIHRleHQgZW4gbGVzIGNvbmRpY2lvbnMgc2Vnw7xlbnRzOgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGEgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVscyBkb2N1bWVudHMsIGRlIGZvcm1hIMOtbnRlZ3JhIG8gcGFyY2lhbCwgc2Vuc2Ugb2J0ZW5pciBjYXAgYmVuZWZpY2kgY29tZXJjaWFsLCDDum5pY2FtZW50IGFtYiBmaW5hbGl0YXRzIGRlIHJlY2VyY2EgaSBzdXBvcnQgbyBpbOKAomx1c3RyYWNpw7MgZGUgbGEgZG9jw6huY2lhLCBtaXRqYW7Dp2FudCBsYSBpbmNvcnBvcmFjacOzIGRlbHMgZG9jdW1lbnRzIGEgdW5hIGJhc2UgZGUgZGFkZXMgZWxlY3Ryw7JuaWNhIGTigJlhY2PDqXMgb2JlcnQuCgoKUGVyIGEgYXF1ZXN0ZXMgZmluYWxpdGF0cyBjZWRlaXhvIGRlIGZvcm1hIG5vIGV4Y2x1c2l2YSwgc2Vuc2UgbMOtbWl0IHRlbXBvcmFsIG5pIHRlcnJpdG9yaWFsLCBlbHMgZHJldHMgZOKAmWV4cGxvdGFjacOzIHF1ZSBlbSBjb3JyZXNwb25lbiBjb20gYSBhdXRvci9hLgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGxhIGPDsnBpYSBkZWxzIGRvY3VtZW50cyBlbiB1biBhbHRyZSBzdXBvcnQsIGFkYXB0YXItbG9zIG8gdHJhbnNmb3JtYXItbG9zIGFtYiBmaW5hbGl0YXRzIGRlIGNvbnNlcnZhY2nDsyBvIGRpZnVzacOzLCBpIGzigJlhY29yZCBhbWIgdGVyY2VyZXMgcGVyc29uZXMgcGVyIHJlYWxpdHphciBhcXVlc3RhIGNvbnNlcnZhY2nDsyBpIGRpZnVzacOzIHJlc3BlY3RhbnQgbGEgY2Vzc2nDsyBkZSBkcmV0cyBxdWUgYXJhIGVmZWN0dW8uCgoKLSBFbSByZXNlcnZvIGxhIHJlc3RhIGRlIGRyZXRzIGFscyBxdWFscyBubyBlcyBmYSByZWZlcsOobmNpYSBlbiBlbCBwcmVzZW50IGRvY3VtZW50LgoKCkxhIFVkRyBhZ3JhZWl4IGxhIHZvc3RyYSBjb2zigKJsYWJvcmFjacOzLgo=</binData>
</mdWrap>
</rightsMD>
</amdSec>
<amdSec ID="FO_10256_13152_1">
<techMD ID="TECH_O_10256_13152_1">
<mdWrap MDTYPE="PREMIS">
<xmlData schemaLocation="http://www.loc.gov/standards/premis http://www.loc.gov/standards/premis/PREMIS-v1-0.xsd">
<premis:premis>
<premis:object>
<premis:objectIdentifier>
<premis:objectIdentifierType>URL</premis:objectIdentifierType>
<premis:objectIdentifierValue>https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf</premis:objectIdentifierValue>
</premis:objectIdentifier>
<premis:objectCategory>File</premis:objectCategory>
<premis:objectCharacteristics>
<premis:fixity>
<premis:messageDigestAlgorithm>MD5</premis:messageDigestAlgorithm>
<premis:messageDigest>431770cc55cfb7a9d9e3fbac7f90554c</premis:messageDigest>
</premis:fixity>
<premis:size>6965871</premis:size>
<premis:format>
<premis:formatDesignation>
<premis:formatName>application/pdf</premis:formatName>
</premis:formatDesignation>
</premis:format>
</premis:objectCharacteristics>
<premis:originalName>CollectionChallengingMotion.pdf</premis:originalName>
</premis:object>
</premis:premis>
</xmlData>
</mdWrap>
</techMD>
</amdSec>
<fileSec>
<fileGrp USE="ORIGINAL">
<file ADMID="FO_10256_13152_1" CHECKSUM="431770cc55cfb7a9d9e3fbac7f90554c" CHECKSUMTYPE="MD5" GROUPID="GROUP_BITSTREAM_10256_13152_1" ID="BITSTREAM_ORIGINAL_10256_13152_1" MIMETYPE="application/pdf" SEQ="1" SIZE="6965871">
</file>
</fileGrp>
</fileSec>
<structMap LABEL="DSpace Object" TYPE="LOGICAL">
<div ADMID="DMD_10256_13152" TYPE="DSpace Object Contents">
<div TYPE="DSpace BITSTREAM">
</div>
</div>
</structMap>
</mets>
<?xml version="1.0" encoding="UTF-8" ?>
<mods:mods schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
<mods:name>
<mods:namePart>Muhammad Habib, Mahmood</mods:namePart>
</mods:name>
<mods:name>
<mods:namePart>Diez, Yago</mods:namePart>
</mods:name>
<mods:name>
<mods:namePart>Salvi, Joaquim</mods:namePart>
</mods:name>
<mods:name>
<mods:namePart>Lladó Bardera, Xavier</mods:namePart>
</mods:name>
<mods:extension>
<mods:dateAvailable encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAvailable>
</mods:extension>
<mods:extension>
<mods:dateAccessioned encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAccessioned>
</mods:extension>
<mods:originInfo>
<mods:dateIssued encoding="iso8601">2017-01</mods:dateIssued>
</mods:originInfo>
<mods:identifier type="issn">0031-3203</mods:identifier>
<mods:identifier type="uri">http://hdl.handle.net/10256/13152</mods:identifier>
<mods:identifier type="doi">http://dx.doi.org/10.1016/j.patcog.2016.07.008</mods:identifier>
<mods:identifier type="idgrec">025389</mods:identifier>
<mods:abstract>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</mods:abstract>
<mods:language>
<mods:languageTerm>eng</mods:languageTerm>
</mods:language>
<mods:accessCondition type="useAndReproduction">info:eu-repo/semantics/embargoedAccess</mods:accessCondition>
<mods:accessCondition type="useAndReproduction">Tots els drets reservats</mods:accessCondition>
<mods:subject>
<mods:topic>Imatges -- Processament</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Image processing</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Imatges -- Segmentació</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Image segmentation</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Visió per ordinador</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Computer vision</mods:topic>
</mods:subject>
<mods:titleInfo>
<mods:title>A collection of challenging motion segmentation benchmark datasets</mods:title>
</mods:titleInfo>
<mods:genre>info:eu-repo/semantics/article</mods:genre>
</mods:mods>
<?xml version="1.0" encoding="UTF-8" ?>
<datacite:resource schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4-2/metadata.xsd">
<datacite:identifier identifierType="Handle">http://hdl.handle.net/10256/13152</datacite:identifier>
<datacite:titles>
<datacite:title>A collection of challenging motion segmentation benchmark datasets</datacite:title>
</datacite:titles>
<datacite:creators>
<datacite:creator>
<datacite:creatorName>Muhammad Habib, Mahmood</datacite:creatorName>
</datacite:creator>
<datacite:creator>
<datacite:creatorName>Diez, Yago</datacite:creatorName>
<datacite:nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-4521-9113</datacite:nameIdentifier>
</datacite:creator>
<datacite:creator>
<datacite:creatorName>Salvi, Joaquim</datacite:creatorName>
<datacite:nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-9482-7126</datacite:nameIdentifier>
</datacite:creator>
<datacite:creator>
<datacite:creatorName>Lladó Bardera, Xavier</datacite:creatorName>
<datacite:nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2777-3479</datacite:nameIdentifier>
</datacite:creator>
</datacite:creators>
<datacite:subjects>
<datacite:subject>Imatges -- Processament</datacite:subject>
<datacite:subject>Image processing</datacite:subject>
<datacite:subject>Imatges -- Segmentació</datacite:subject>
<datacite:subject>Image segmentation</datacite:subject>
<datacite:subject>Visió per ordinador</datacite:subject>
<datacite:subject>Computer vision</datacite:subject>
</datacite:subjects>
<datacite:descriptions>
<datacite:description descriptionType="Abstract">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</datacite:description>
</datacite:descriptions>
<datacite:dates>
<datacite:date dateType="Issued">2017-01</datacite:date>
</datacite:dates>
<datacite:publicationYear>2017</datacite:publicationYear>
<datacite:languages>
<datacite:language>eng</datacite:language>
</datacite:languages>
<datacite:relatedIdentifiers>
<datacite:relatedIdentifier relatedIdentifierType="ISSN" relationType="IsSupplementTo">0031-3203</datacite:relatedIdentifier>
<datacite:relatedIdentifier relatedIdentifierType="DOI" relationType="IsSupplementTo">http://dx.doi.org/10.1016/j.patcog.2016.07.008</datacite:relatedIdentifier>
<datacite:relatedIdentifier relatedIdentifierType="URL" relationType="IsSupplementTo">025389</datacite:relatedIdentifier>
</datacite:relatedIdentifiers>
<datacite:rightsList>
<datacite:rights>Tots els drets reservats</datacite:rights>
<datacite:rights rightsURI="info:eu-repo/semantics/embargoedAccess">info:eu-repo/semantics/embargoedAccess</datacite:rights>
</datacite:rightsList>
<datacite:formats>
<datacite:format>application/pdf</datacite:format>
</datacite:formats>
<datacite:publisher>Elsevier</datacite:publisher>
</datacite:resource>
<?xml version="1.0" encoding="UTF-8" ?>
<atom:entry schemaLocation="http://www.w3.org/2005/Atom http://www.kbcafe.com/rss/atom.xsd.xml">
<atom:id>http://hdl.handle.net/10256/13152/ore.xml</atom:id>
<atom:published>2016-11-17T07:23:07Z</atom:published>
<atom:updated>2016-11-17T07:23:07Z</atom:updated>
<atom:source>
<atom:generator>DUGiDocs</atom:generator>
</atom:source>
<atom:title>A collection of challenging motion segmentation benchmark datasets</atom:title>
<atom:author>
<atom:name>Muhammad Habib, Mahmood</atom:name>
</atom:author>
<atom:author>
<atom:name>Diez, Yago</atom:name>
</atom:author>
<atom:author>
<atom:name>Salvi, Joaquim</atom:name>
</atom:author>
<atom:author>
<atom:name>Lladó Bardera, Xavier</atom:name>
</atom:author>
<oreatom:triples>
<rdf:Description about="http://hdl.handle.net/10256/13152/ore.xml#atom">
<dcterms:modified>2016-11-17T07:23:07Z</dcterms:modified>
</rdf:Description>
<rdf:Description about="https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf">
<dcterms:description>ORIGINAL</dcterms:description>
</rdf:Description>
<rdf:Description about="https://dugi-doc.udg.edu/bitstream/10256/13152/2/license.txt">
<dcterms:description>LICENSE</dcterms:description>
</rdf:Description>
<rdf:Description about="https://dugi-doc.udg.edu/bitstream/10256/13152/3/CollectionChallengingMotion.pdf.jpg">
<dcterms:description>THUMBNAIL</dcterms:description>
</rdf:Description>
</oreatom:triples>
</atom:entry>
<?xml version="1.0" encoding="UTF-8" ?>
<qdc:qualifieddc schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
<dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
<dc:creator>Muhammad Habib, Mahmood</dc:creator>
<dc:creator>Diez, Yago</dc:creator>
<dc:creator>Salvi, Joaquim</dc:creator>
<dc:creator>Lladó Bardera, Xavier</dc:creator>
<dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
<dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
<dc:subject>Imatges -- Processament</dc:subject>
<dc:subject>Image processing</dc:subject>
<dc:subject>Imatges -- Segmentació</dc:subject>
<dc:subject>Image segmentation</dc:subject>
<dc:subject>Visió per ordinador</dc:subject>
<dc:subject>Computer vision</dc:subject>
<dcterms:abstract>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dcterms:abstract>
<dcterms:issued>2017-01</dcterms:issued>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:identifier>0031-3203</dc:identifier>
<dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
<dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
<dc:identifier>025389</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:relation>
<dc:relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:relation>
<dc:relation>Articles publicats (D-ATC)</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
<dc:relation>FP7</dc:relation>
<dc:relation>PANDORA</dc:relation>
<dc:relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dc:relation>
<dc:relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
<dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
<dc:rights>Tots els drets reservats</dc:rights>
<dc:publisher>Elsevier</dc:publisher>
</qdc:qualifieddc>
<?xml version="1.0" encoding="UTF-8" ?>
<rdf:RDF schemaLocation="http://www.openarchives.org/OAI/2.0/rdf/ http://www.openarchives.org/OAI/2.0/rdf.xsd">
<ow:Publication about="oai:dugi-doc.udg.edu:10256/13152">
<dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
<dc:creator>Muhammad Habib, Mahmood</dc:creator>
<dc:creator>Diez, Yago</dc:creator>
<dc:creator>Salvi, Joaquim</dc:creator>
<dc:creator>Lladó Bardera, Xavier</dc:creator>
<dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
<dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
<dc:subject>Imatges -- Processament</dc:subject>
<dc:subject>Image processing</dc:subject>
<dc:subject>Imatges -- Segmentació</dc:subject>
<dc:subject>Image segmentation</dc:subject>
<dc:subject>Visió per ordinador</dc:subject>
<dc:subject>Computer vision</dc:subject>
<dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
<dc:date>2016-11-17T07:23:07Z</dc:date>
<dc:date>2016-11-17T07:23:07Z</dc:date>
<dc:date>2017-01</dc:date>
<dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:identifier>0031-3203</dc:identifier>
<dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
<dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
<dc:identifier>025389</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:relation>
<dc:relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:relation>
<dc:relation>Articles publicats (D-ATC)</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
<dc:relation>FP7</dc:relation>
<dc:relation>PANDORA</dc:relation>
<dc:relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dc:relation>
<dc:relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
<dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
<dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
<dc:rights>Tots els drets reservats</dc:rights>
<dc:publisher>Elsevier</dc:publisher>
</ow:Publication>
</rdf:RDF>
<?xml version="1.0" encoding="UTF-8" ?>
<metadata schemaLocation="http://www.lyncode.com/xoai http://www.lyncode.com/xsd/xoai.xsd">
<element name="dc">
<element name="contributor">
<element name="author">
<element name="none">
<field name="value">Muhammad Habib, Mahmood</field>
<field name="value">Diez, Yago</field>
<field name="value">Salvi, Joaquim</field>
<field name="value">Lladó Bardera, Xavier</field>
</element>
</element>
<element name="funder">
<element name="none">
<field name="value">Ministerio de Ciencia e Innovación (Espanya)</field>
<field name="value">Ministerio de Economía y Competitividad (Espanya)</field>
</element>
</element>
</element>
<element name="date">
<element name="accessioned">
<element name="none">
<field name="value">2016-11-17T07:23:07Z</field>
</element>
</element>
<element name="available">
<element name="none">
<field name="value">2016-11-17T07:23:07Z</field>
</element>
</element>
<element name="issued">
<element name="none">
<field name="value">2017-01</field>
</element>
</element>
<element name="embargoEndDate">
<element name="none">
<field name="value">info:eu-repo/date/embargoEnd/2026-01-01</field>
</element>
</element>
</element>
<element name="identifier">
<element name="issn">
<element name="none">
<field name="value">0031-3203</field>
</element>
</element>
<element name="uri">
<element name="none">
<field name="value">http://hdl.handle.net/10256/13152</field>
</element>
</element>
<element name="doi">
<element name="none">
<field name="value">http://dx.doi.org/10.1016/j.patcog.2016.07.008</field>
</element>
</element>
<element name="idgrec">
<element name="none">
<field name="value">025389</field>
</element>
</element>
</element>
<element name="description">
<element name="abstract">
<element name="none">
<field name="value">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</field>
</element>
</element>
<element name="provenance">
<element name="none">
<field name="value">Submitted by Claudia Plana (claudia.plana@udg.edu) on 2016-11-17T07:23:07Z No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5)</field>
<field name="value">Made available in DSpace on 2016-11-17T07:23:07Z (GMT). No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5) Previous issue date: 2017-01</field>
</element>
</element>
<element name="sponsorship">
<element name="none">
<field name="value">This work is supported by the FP7-ICT-2011-7 project PANDORA (Ref. 288273) funded by the European Commission, and by two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government: RAIMON (Ref. CTM2011-29691-C02-02) and NICOLE (Ref. TIN2014-55710-R)</field>
</element>
</element>
</element>
<element name="format">
<element name="mimetype">
<element name="none">
<field name="value">application/pdf</field>
</element>
</element>
</element>
<element name="language">
<element name="iso">
<element name="none">
<field name="value">eng</field>
</element>
</element>
</element>
<element name="publisher">
<element name="none">
<field name="value">Elsevier</field>
</element>
</element>
<element name="relation">
<element name="none">
<field name="value">info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</field>
<field name="value">info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</field>
</element>
<element name="isformatof">
<element name="none">
<field name="value">Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</field>
</element>
</element>
<element name="ispartof">
<element name="none">
<field name="value">© Pattern Recognition, 2017, vol. 61, p. 1-14</field>
</element>
</element>
<element name="ispartofseries">
<element name="none">
<field name="value">Articles publicats (D-ATC)</field>
</element>
</element>
<element name="projectID">
<element name="none">
<field name="value">info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</field>
</element>
</element>
<element name="FundingProgramme">
<element name="none">
<field name="value">FP7</field>
</element>
</element>
<element name="ProjectAcronym">
<element name="none">
<field name="value">PANDORA</field>
<field name="value">HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</field>
<field name="value">ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</field>
</element>
</element>
</element>
<element name="rights">
<element name="none">
<field name="value">Tots els drets reservats</field>
</element>
<element name="accessRights">
<element name="none">
<field name="value">info:eu-repo/semantics/embargoedAccess</field>
</element>
</element>
</element>
<element name="subject">
<element name="none">
<field name="value">Imatges -- Processament</field>
<field name="value">Image processing</field>
<field name="value">Imatges -- Segmentació</field>
<field name="value">Image segmentation</field>
<field name="value">Visió per ordinador</field>
<field name="value">Computer vision</field>
</element>
</element>
<element name="title">
<element name="none">
<field name="value">A collection of challenging motion segmentation benchmark datasets</field>
</element>
</element>
<element name="type">
<element name="none">
<field name="value">info:eu-repo/semantics/article</field>
</element>
<element name="version">
<element name="none">
<field name="value">info:eu-repo/semantics/publishedVersion</field>
</element>
</element>
</element>
<element name="embargo">
<element name="terms">
<element name="none">
<field name="value">Cap</field>
</element>
</element>
</element>
</element>
<element name="adm">
<element name="sets">
<element name="hidden">
<element name="none">
<field name="value">NO</field>
</element>
</element>
</element>
</element>
<element name="bundles">
<element name="bundle">
<field name="name">ORIGINAL</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">CollectionChallengingMotion.pdf</field>
<field name="originalName">CollectionChallengingMotion.pdf</field>
<field name="format">application/pdf</field>
<field name="size">6965871</field>
<field name="url">https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf</field>
<field name="checksum">431770cc55cfb7a9d9e3fbac7f90554c</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">1</field>
</element>
</element>
</element>
<element name="bundle">
<field name="name">LICENSE</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">license.txt</field>
<field name="originalName">license.txt</field>
<field name="format">text/plain</field>
<field name="size">1079</field>
<field name="url">https://dugi-doc.udg.edu/bitstream/10256/13152/2/license.txt</field>
<field name="checksum">0d4b4c458d95d1eb4b29247ea5bd4e04</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">2</field>
</element>
</element>
</element>
<element name="bundle">
<field name="name">THUMBNAIL</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">CollectionChallengingMotion.pdf.jpg</field>
<field name="originalName">CollectionChallengingMotion.pdf.jpg</field>
<field name="description">Generated Thumbnail</field>
<field name="format">image/jpeg</field>
<field name="size">3325</field>
<field name="url">https://dugi-doc.udg.edu/bitstream/10256/13152/3/CollectionChallengingMotion.pdf.jpg</field>
<field name="checksum">9429b58149c2c6500622383b8489393b</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">3</field>
</element>
</element>
</element>
</element>
<element name="others">
<field name="handle">10256/13152</field>
<field name="identifier">oai:dugi-doc.udg.edu:10256/13152</field>
<field name="lastModifyDate">2024-07-08 12:58:35.598</field>
</element>
<element name="repository">
<field name="name">DUGiDocs</field>
<field name="mail">oriol.olive@udg.edu</field>
</element>
<element name="license">
<field name="bin">Q29uZGljaW9ucyBkZWwgZGlww7JzaXQKCgpQZXIgcG9kZXIgcHVibGljYXIgZWwgZG9jdW1lbnQgYWwgRFVHaSBlbnMgY2FsIHVuYSBhdXRvcml0emFjacOzIHZvc3RyYSBwZXIgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVsIHRleHQgZW4gbGVzIGNvbmRpY2lvbnMgc2Vnw7xlbnRzOgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGEgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVscyBkb2N1bWVudHMsIGRlIGZvcm1hIMOtbnRlZ3JhIG8gcGFyY2lhbCwgc2Vuc2Ugb2J0ZW5pciBjYXAgYmVuZWZpY2kgY29tZXJjaWFsLCDDum5pY2FtZW50IGFtYiBmaW5hbGl0YXRzIGRlIHJlY2VyY2EgaSBzdXBvcnQgbyBpbOKAomx1c3RyYWNpw7MgZGUgbGEgZG9jw6huY2lhLCBtaXRqYW7Dp2FudCBsYSBpbmNvcnBvcmFjacOzIGRlbHMgZG9jdW1lbnRzIGEgdW5hIGJhc2UgZGUgZGFkZXMgZWxlY3Ryw7JuaWNhIGTigJlhY2PDqXMgb2JlcnQuCgoKUGVyIGEgYXF1ZXN0ZXMgZmluYWxpdGF0cyBjZWRlaXhvIGRlIGZvcm1hIG5vIGV4Y2x1c2l2YSwgc2Vuc2UgbMOtbWl0IHRlbXBvcmFsIG5pIHRlcnJpdG9yaWFsLCBlbHMgZHJldHMgZOKAmWV4cGxvdGFjacOzIHF1ZSBlbSBjb3JyZXNwb25lbiBjb20gYSBhdXRvci9hLgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGxhIGPDsnBpYSBkZWxzIGRvY3VtZW50cyBlbiB1biBhbHRyZSBzdXBvcnQsIGFkYXB0YXItbG9zIG8gdHJhbnNmb3JtYXItbG9zIGFtYiBmaW5hbGl0YXRzIGRlIGNvbnNlcnZhY2nDsyBvIGRpZnVzacOzLCBpIGzigJlhY29yZCBhbWIgdGVyY2VyZXMgcGVyc29uZXMgcGVyIHJlYWxpdHphciBhcXVlc3RhIGNvbnNlcnZhY2nDsyBpIGRpZnVzacOzIHJlc3BlY3RhbnQgbGEgY2Vzc2nDsyBkZSBkcmV0cyBxdWUgYXJhIGVmZWN0dW8uCgoKLSBFbSByZXNlcnZvIGxhIHJlc3RhIGRlIGRyZXRzIGFscyBxdWFscyBubyBlcyBmYSByZWZlcsOobmNpYSBlbiBlbCBwcmVzZW50IGRvY3VtZW50LgoKCkxhIFVkRyBhZ3JhZWl4IGxhIHZvc3RyYSBjb2zigKJsYWJvcmFjacOzLgo=</field>
</element>
</metadata>