A collection of challenging motion segmentation benchmark datasets
Resource identifiers
http://hdl.handle.net/10256/13152
Origin
(DUGiDocs. Recerca)


Title:
A collection of challenging motion segmentation benchmark datasets
Subject:
Imatges -- Processament
Image processing
Imatges -- Segmentació
Imaging segmentation
Visió per ordinador
Computer vision
Description:
An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/
This work was supported by the FP7-ICT-2011-7 project PANDORA (Ref. 288273), funded by the European Commission, and by two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government: RAIMON (Ref. CTM2011-29691-C02-02) and NICOLE (Ref. TIN2014-55710-R)
Origin:
© Pattern Recognition, 2017, vol. 61, p. 1-14
Articles publicats (D-ATC)
Language:
English
Relationship:
info:eu-repo/semantics/altIdentifier/doi/10.1016/j.patcog.2016.07.008
info:eu-repo/semantics/altIdentifier/issn/0031-3203
info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/
info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/
info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA
Author/Producer:
Muhammad Habib, Mahmood
Diez, Yago
Salvi, Joaquim
Lladó Bardera, Xavier
Publisher:
Elsevier
Other collaborators/producers:
Ministerio de Ciencia e Innovación (Espanya)
Ministerio de Economía y Competitividad (Espanya)
Rights:
Tots els drets reservats (All rights reserved)
info:eu-repo/semantics/embargoedAccess
Date:
info:eu-repo/date/embargoEnd/2026-01-01
2017-01
Resource type:
info:eu-repo/semantics/article
info:eu-repo/semantics/publishedVersion
Format:
application/pdf

oai_dc


    <?xml version="1.0" encoding="UTF-8"?>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
      <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
      <dc:creator>Muhammad Habib, Mahmood</dc:creator>
      <dc:creator>Diez, Yago</dc:creator>
      <dc:creator>Salvi, Joaquim</dc:creator>
      <dc:creator>Lladó Bardera, Xavier</dc:creator>
      <dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
      <dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
      <dc:subject>Imatges -- Processament</dc:subject>
      <dc:subject>Image processing</dc:subject>
      <dc:subject>Imatges -- Segmentació</dc:subject>
      <dc:subject>Imaging segmentation</dc:subject>
      <dc:subject>Visió per ordinador</dc:subject>
      <dc:subject>Computer vision</dc:subject>
      <dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
      <dc:description>This work is supported by the FP7-ICT-2011 7project PANDORA (Ref 288273) funded by the European Commission, two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government. RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</dc:description>
      <dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
      <dc:date>2017-01</dc:date>
      <dc:type>info:eu-repo/semantics/article</dc:type>
      <dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
      <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
      <dc:relation>info:eu-repo/semantics/altIdentifier/doi/10.1016/j.patcog.2016.07.008</dc:relation>
      <dc:relation>info:eu-repo/semantics/altIdentifier/issn/0031-3203</dc:relation>
      <dc:language>eng</dc:language>
      <dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
      <dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
      <dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
      <dc:source>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:source>
      <dc:source>Articles publicats (D-ATC)</dc:source>
      <dc:rights>Tots els drets reservats</dc:rights>
      <dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
      <dc:format>application/pdf</dc:format>
      <dc:publisher>Elsevier</dc:publisher>
    </oai_dc:dc>
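The oai_dc export is the simplest of the formats on this page: flat Dublin Core elements under a single root. As a minimal sketch of consuming it with Python's standard library (the sample string reproduces only a few fields of the record above; the namespace URIs are the canonical OAI-PMH and Dublin Core ones already cited in the record's schemaLocation):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the oai_dc record, with the standard namespace bindings.
RECORD = """<oai_dc:dc
    xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
  <dc:creator>Muhammad Habib, Mahmood</dc:creator>
  <dc:creator>Diez, Yago</dc:creator>
  <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
</oai_dc:dc>"""

# Prefix map for XPath lookups; only the dc prefix is needed for the fields.
NS = {"dc": "http://purl.org/dc/elements/1.1/"}

root = ET.fromstring(RECORD)
title = root.findtext("dc:title", namespaces=NS)
creators = [e.text for e in root.findall("dc:creator", NS)]

print(title)
print(creators)
```

Repeatable elements such as `dc:creator` and `dc:relation` simply occur multiple times, which is why `findall` rather than a single lookup is the natural access pattern.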

didl


    <?xml version="1.0" encoding="UTF-8"?>
    <d:DIDL xmlns:d="urn:mpeg:mpeg21:2002:02-DIDL-NS"
            xmlns:dii="urn:mpeg:mpeg21:2002:01-DII-NS"
            xmlns:dcterms="http://purl.org/dc/terms/"
            xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
            xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
      <d:DIDLInfo>
        <dcterms:created xsi:schemaLocation="http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/dcterms.xsd">2016-11-17T07:23:07Z</dcterms:created>
      </d:DIDLInfo>
      <d:Item id="hdl_10256_13152">
        <d:Descriptor>
          <d:Statement mimeType="application/xml; charset=utf-8">
            <dii:Identifier xsi:schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:10256/13152</dii:Identifier>
          </d:Statement>
        </d:Descriptor>
        <d:Descriptor>
          <d:Statement mimeType="application/xml; charset=utf-8">
            <oai_dc:dc xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
              <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
              <dc:creator>Muhammad Habib, Mahmood</dc:creator>
              <dc:creator>Diez, Yago</dc:creator>
              <dc:creator>Salvi, Joaquim</dc:creator>
              <dc:creator>Lladó Bardera, Xavier</dc:creator>
              <dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
              <dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
              <dc:subject>Imatges -- Processament</dc:subject>
              <dc:subject>Image processing</dc:subject>
              <dc:subject>Imatges -- Segmentació</dc:subject>
              <dc:subject>Imaging segmentation</dc:subject>
              <dc:subject>Visió per ordinador</dc:subject>
              <dc:subject>Computer vision</dc:subject>
              <dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
              <dc:date>2016-11-17T07:23:07Z</dc:date>
              <dc:date>2016-11-17T07:23:07Z</dc:date>
              <dc:date>2017-01</dc:date>
              <dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
              <dc:type>info:eu-repo/semantics/article</dc:type>
              <dc:identifier>0031-3203</dc:identifier>
              <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
              <dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
              <dc:identifier>025389</dc:identifier>
              <dc:language>eng</dc:language>
              <dc:relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:relation>
              <dc:relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:relation>
              <dc:relation>Articles publicats (D-ATC)</dc:relation>
              <dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
              <dc:relation>FP7</dc:relation>
              <dc:relation>PANDORA</dc:relation>
              <dc:relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dc:relation>
              <dc:relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dc:relation>
              <dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
              <dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
              <dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
              <dc:rights>Tots els drets reservats</dc:rights>
              <dc:publisher>Elsevier</dc:publisher>
            </oai_dc:dc>
          </d:Statement>
        </d:Descriptor>
        <d:Component id="10256_13152_1">
          <d:Resource mimeType="application/pdf" ref="https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf"/>
        </d:Component>
      </d:Item>
    </d:DIDL>

dim


    <?xml version="1.0" encoding="UTF-8"?>
    <dim:dim xmlns:dim="http://www.dspace.org/xmlns/dspace/dim"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.dspace.org/xmlns/dspace/dim http://www.dspace.org/schema/dim.xsd">
      <dim:field element="contributor" mdschema="dc" qualifier="author">Muhammad Habib, Mahmood</dim:field>
      <dim:field element="contributor" mdschema="dc" qualifier="author">Diez, Yago</dim:field>
      <dim:field element="contributor" mdschema="dc" qualifier="author">Salvi, Joaquim</dim:field>
      <dim:field element="contributor" mdschema="dc" qualifier="author">Lladó Bardera, Xavier</dim:field>
      <dim:field element="contributor" mdschema="dc" qualifier="funder">Ministerio de Ciencia e Innovación (Espanya)</dim:field>
      <dim:field element="contributor" mdschema="dc" qualifier="funder">Ministerio de Economía y Competitividad (Espanya)</dim:field>
      <dim:field element="date" mdschema="dc" qualifier="accessioned">2016-11-17T07:23:07Z</dim:field>
      <dim:field element="date" mdschema="dc" qualifier="available">2016-11-17T07:23:07Z</dim:field>
      <dim:field element="date" mdschema="dc" qualifier="issued">2017-01</dim:field>
      <dim:field element="date" mdschema="dc" qualifier="embargoEndDate">info:eu-repo/date/embargoEnd/2026-01-01</dim:field>
      <dim:field element="identifier" mdschema="dc" qualifier="issn">0031-3203</dim:field>
      <dim:field element="identifier" mdschema="dc" qualifier="uri">http://hdl.handle.net/10256/13152</dim:field>
      <dim:field element="identifier" mdschema="dc" qualifier="doi">http://dx.doi.org/10.1016/j.patcog.2016.07.008</dim:field>
      <dim:field element="identifier" mdschema="dc" qualifier="idgrec">025389</dim:field>
      <dim:field element="description" mdschema="dc" qualifier="abstract">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dim:field>
      <dim:field element="description" mdschema="dc" qualifier="provenance">Submitted by Claudia Plana (claudia.plana@udg.edu) on 2016-11-17T07:23:07Z No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5)</dim:field>
      <dim:field element="description" mdschema="dc" qualifier="provenance">Made available in DSpace on 2016-11-17T07:23:07Z (GMT). No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5) Previous issue date: 2017-01</dim:field>
      <dim:field element="description" mdschema="dc" qualifier="sponsorship">This work is supported by the FP7-ICT-2011 7project PANDORA (Ref 288273) funded by the European Commission, two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government. RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</dim:field>
      <dim:field element="format" mdschema="dc" qualifier="mimetype">application/pdf</dim:field>
      <dim:field element="language" mdschema="dc" qualifier="iso">eng</dim:field>
      <dim:field element="publisher" mdschema="dc">Elsevier</dim:field>
      <dim:field element="relation" mdschema="dc">info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dim:field>
      <dim:field element="relation" mdschema="dc">info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="isformatof">Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="ispartof">© Pattern Recognition, 2017, vol. 61, p. 1-14</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="ispartofseries">Articles publicats (D-ATC)</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="projectID">info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="FundingProgramme">FP7</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="ProjectAcronym">PANDORA</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="ProjectAcronym">HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dim:field>
      <dim:field element="relation" mdschema="dc" qualifier="ProjectAcronym">ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dim:field>
      <dim:field element="rights" mdschema="dc">Tots els drets reservats</dim:field>
      <dim:field element="rights" mdschema="dc" qualifier="accessRights">info:eu-repo/semantics/embargoedAccess</dim:field>
      <dim:field element="subject" mdschema="dc">Imatges -- Processament</dim:field>
      <dim:field element="subject" mdschema="dc">Image processing</dim:field>
      <dim:field element="subject" mdschema="dc">Imatges -- Segmentació</dim:field>
      <dim:field element="subject" mdschema="dc">Imaging segmentation</dim:field>
      <dim:field element="subject" mdschema="dc">Visió per ordinador</dim:field>
      <dim:field element="subject" mdschema="dc">Computer vision</dim:field>
      <dim:field element="title" mdschema="dc">A collection of challenging motion segmentation benchmark datasets</dim:field>
      <dim:field element="type" mdschema="dc">info:eu-repo/semantics/article</dim:field>
      <dim:field element="type" mdschema="dc" qualifier="version">info:eu-repo/semantics/publishedVersion</dim:field>
      <dim:field element="embargo" mdschema="dc" qualifier="terms">Cap</dim:field>
    </dim:dim>
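The `info:eu-repo/grantAgreement` relations that recur across these formats pack funder metadata into one slash-delimited string: funder, funding programme, project ID, jurisdiction, project name, acronym. A sketch of unpacking one of the strings from this record (the dictionary keys are descriptive labels chosen here, not part of the syntax; a project name that itself contained a slash would defeat this naive split):

```python
def parse_grant(relation: str) -> dict:
    """Split an info:eu-repo/grantAgreement relation into its slash-delimited
    parts: funder / programme / project ID / jurisdiction / name / acronym."""
    prefix = "info:eu-repo/grantAgreement/"
    if not relation.startswith(prefix):
        raise ValueError("not a grantAgreement relation")
    parts = relation[len(prefix):].split("/")
    keys = ["funder", "program", "project_id", "jurisdiction", "name", "acronym"]
    # Pad missing trailing slots with empty strings before pairing.
    parts += [""] * (len(keys) - len(parts))
    return dict(zip(keys, parts))

pandora = parse_grant(
    "info:eu-repo/grantAgreement/EC/FP7/288273/EU/"
    "Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA"
)
print(pandora["funder"], pandora["program"], pandora["project_id"])

# An empty programme slot (//) and a trailing slash both yield empty strings.
raimon = parse_grant(
    "info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/"
    "ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION "
    "DE EXPLOTACIONES DE ACUICULTURA MARINA/"
)
print(raimon["project_id"], repr(raimon["acronym"]))
```

This is why the MICINN and MINECO relations above carry a double slash (no programme recorded) and end in a slash (no acronym recorded), while the EC/FP7 relation fills all six slots.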

edm


    <?xml version="1.0" encoding="UTF-8"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:edm="http://www.europeana.eu/schemas/edm/"
             xmlns:ore="http://www.openarchives.org/ore/terms/"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.w3.org/1999/02/22-rdf-syntax-ns# http://www.europeana.eu/schemas/edm/EDM.xsd">
      <edm:ProvidedCHO rdf:about="https://catalonica.bnc.cat/catalonicahub/lod/oai:dugi-doc.udg.edu:10256_--_13152#ent0">
        <dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
        <dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
        <dc:creator>Muhammad Habib, Mahmood</dc:creator>
        <dc:creator>Diez, Yago</dc:creator>
        <dc:creator>Salvi, Joaquim</dc:creator>
        <dc:creator>Lladó Bardera, Xavier</dc:creator>
        <dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
        <dc:date>2017-01</dc:date>
        <dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
        <dc:description>This work is supported by the FP7-ICT-2011 7project PANDORA (Ref 288273) funded by the European Commission, two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government. RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</dc:description>
        <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
        <dc:language>eng</dc:language>
        <dc:publisher>Elsevier</dc:publisher>
        <dc:relation>info:eu-repo/semantics/altIdentifier/doi/10.1016/j.patcog.2016.07.008</dc:relation>
        <dc:relation>info:eu-repo/semantics/altIdentifier/issn/0031-3203</dc:relation>
        <dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
        <dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
        <dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
        <dc:rights>Tots els drets reservats</dc:rights>
        <dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
        <dc:source>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:source>
        <dc:source>Articles publicats (D-ATC)</dc:source>
        <dc:subject>Imatges -- Processament</dc:subject>
        <dc:subject>Image processing</dc:subject>
        <dc:subject>Imatges -- Segmentació</dc:subject>
        <dc:subject>Imaging segmentation</dc:subject>
        <dc:subject>Visió per ordinador</dc:subject>
        <dc:subject>Computer vision</dc:subject>
        <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
        <dc:type>info:eu-repo/semantics/article</dc:type>
        <dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
        <edm:type>TEXT</edm:type>
      </edm:ProvidedCHO>
      <ore:Aggregation rdf:about="https://catalonica.bnc.cat/catalonicahub/lod/oai:dugi-doc.udg.edu:10256_--_13152#ent1">
        <edm:aggregatedCHO rdf:resource="https://catalonica.bnc.cat/catalonicahub/lod/oai:dugi-doc.udg.edu:10256_--_13152#ent0"/>
        <edm:dataProvider>DUGiDocs. Recerca</edm:dataProvider>
        <edm:isShownAt rdf:resource="http://hdl.handle.net/10256/13152"/>
        <edm:isShownBy rdf:resource="https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf"/>
        <edm:object rdf:resource="https://dugi-doc.udg.edu/bitstream/10256/13152/3/CollectionChallengingMotion.pdf.jpg"/>
        <edm:provider>Catalònica</edm:provider>
        <edm:rights rdf:resource="http://creativecommons.org/licenses/by-nc-nd/4.0/"/>
      </ore:Aggregation>
    </rdf:RDF>

etdms


    <?xml version="1.0" encoding="UTF-8"?>
    <thesis xmlns="http://www.ndltd.org/standards/metadata/etdms/1.0/"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.ndltd.org/standards/metadata/etdms/1.0/ http://www.ndltd.org/standards/metadata/etdms/1.0/etdms.xsd">
      <title>A collection of challenging motion segmentation benchmark datasets</title>
      <creator>Muhammad Habib, Mahmood</creator>
      <creator>Diez, Yago</creator>
      <creator>Salvi, Joaquim</creator>
      <creator>Lladó Bardera, Xavier</creator>
      <contributor>Ministerio de Ciencia e Innovación (Espanya)</contributor>
      <contributor>Ministerio de Economía y Competitividad (Espanya)</contributor>
      <subject>Imatges -- Processament</subject>
      <subject>Image processing</subject>
      <subject>Imatges -- Segmentació</subject>
      <subject>Imaging segmentation</subject>
      <subject>Visió per ordinador</subject>
      <subject>Computer vision</subject>
      <description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</description>
      <date>2016-11-17</date>
      <date>2016-11-17</date>
      <date>2017-01</date>
      <date>info:eu-re</date>
      <type>info:eu-repo/semantics/article</type>
      <identifier>0031-3203</identifier>
      <identifier>http://hdl.handle.net/10256/13152</identifier>
      <identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</identifier>
      <identifier>025389</identifier>
      <language>eng</language>
      <relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</relation>
      <relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</relation>
      <relation>Articles publicats (D-ATC)</relation>
      <relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</relation>
      <relation>FP7</relation>
      <relation>PANDORA</relation>
      <relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</relation>
      <relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</relation>
      <relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</relation>
      <relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</relation>
      <rights>info:eu-repo/semantics/embargoedAccess</rights>
      <rights>Tots els drets reservats</rights>
      <publisher>Elsevier</publisher>
    </thesis>

marc


    <?xml version="1.0" encoding="UTF-8" ?>

  1. < record schemaLocation =" http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd " >

    1. < leader > 00925njm 22002777a 4500 </ leader >

    2. < datafield ind1 =" " ind2 =" " tag =" 042 " >

      1. < subfield code =" a " > dc </ subfield >

      </ datafield >

    3. < datafield ind1 =" " ind2 =" " tag =" 720 " >

      1. < subfield code =" a " > Muhammad Habib, Mahmood </ subfield >

    <subfield code="e">author</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="720">
    <subfield code="a">Diez, Yago</subfield>
    <subfield code="e">author</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="720">
    <subfield code="a">Salvi, Joaquim</subfield>
    <subfield code="e">author</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="720">
    <subfield code="a">Lladó Bardera, Xavier</subfield>
    <subfield code="e">author</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="260">
    <subfield code="c">2017-01</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="520">
    <subfield code="a">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</subfield>
  </datafield>
  <datafield ind1="8" ind2=" " tag="024">
    <subfield code="a">0031-3203</subfield>
  </datafield>
  <datafield ind1="8" ind2=" " tag="024">
    <subfield code="a">http://hdl.handle.net/10256/13152</subfield>
  </datafield>
  <datafield ind1="8" ind2=" " tag="024">
    <subfield code="a">http://dx.doi.org/10.1016/j.patcog.2016.07.008</subfield>
  </datafield>
  <datafield ind1="8" ind2=" " tag="024">
    <subfield code="a">025389</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="653">
    <subfield code="a">Imatges -- Processament</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="653">
    <subfield code="a">Image processing</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="653">
    <subfield code="a">Imatges -- Segmentació</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="653">
    <subfield code="a">Imaging segmentation</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="653">
    <subfield code="a">Visió per ordinador</subfield>
  </datafield>
  <datafield ind1=" " ind2=" " tag="653">
    <subfield code="a">Computer vision</subfield>
  </datafield>
  <datafield ind1="0" ind2="0" tag="245">
    <subfield code="a">A collection of challenging motion segmentation benchmark datasets</subfield>
  </datafield>
</record>
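The record above follows the familiar MARCXML `datafield`/`subfield` pattern (title in 245$a, names in 720$a, identifiers in 024$a). As a minimal sketch, field values can be pulled out of such a record with Python's standard-library ElementTree; the embedded fragment below is abridged from the record above and, matching the extract as shown, carries no namespace declaration:

```python
import xml.etree.ElementTree as ET

# Abridged fragment mirroring the record above (title + one author field).
record = ET.fromstring("""
<record>
  <datafield ind1=" " ind2=" " tag="720">
    <subfield code="a">Diez, Yago</subfield>
    <subfield code="e">author</subfield>
  </datafield>
  <datafield ind1="0" ind2="0" tag="245">
    <subfield code="a">A collection of challenging motion segmentation benchmark datasets</subfield>
  </datafield>
</record>
""")

def subfields(rec, tag, code):
    """Collect the values of subfield `code` across all datafields with `tag`."""
    return [sf.text
            for df in rec.findall(f"datafield[@tag='{tag}']")
            for sf in df.findall(f"subfield[@code='{code}']")]

print(subfields(record, "245", "a"))  # title(s)
print(subfields(record, "720", "a"))  # name(s)
```

Against the full record, the same helper would return all four author names and all four 024 identifiers.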

mets


    <?xml version="1.0" encoding="UTF-8" ?>

<mets ID="DSpace_ITEM_10256-13152" OBJID="hdl:10256/13152" PROFILE="DSpace METS SIP Profile 1.0" TYPE="DSpace ITEM" schemaLocation="http://www.loc.gov/METS/ http://www.loc.gov/standards/mets/mets.xsd">
  <metsHdr CREATEDATE="2024-10-26T00:41:00Z">
    <agent ROLE="CUSTODIAN" TYPE="ORGANIZATION">
      <name>DUGiDocs</name>
    </agent>
  </metsHdr>
  <dmdSec ID="DMD_10256_13152">
    <mdWrap MDTYPE="MODS">
      <xmlData schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
        <mods:mods schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
          <mods:name>
            <mods:role>
              <mods:roleTerm type="text">author</mods:roleTerm>
            </mods:role>
            <mods:namePart>Muhammad Habib, Mahmood</mods:namePart>
          </mods:name>
          <mods:name>
            <mods:role>
              <mods:roleTerm type="text">author</mods:roleTerm>
            </mods:role>
            <mods:namePart>Diez, Yago</mods:namePart>
          </mods:name>
          <mods:name>
            <mods:role>
              <mods:roleTerm type="text">author</mods:roleTerm>
            </mods:role>
            <mods:namePart>Salvi, Joaquim</mods:namePart>
          </mods:name>
          <mods:name>
            <mods:role>
              <mods:roleTerm type="text">author</mods:roleTerm>
            </mods:role>
            <mods:namePart>Lladó Bardera, Xavier</mods:namePart>
          </mods:name>
          <mods:name>
            <mods:role>
              <mods:roleTerm type="text">funder</mods:roleTerm>
            </mods:role>
            <mods:namePart>Ministerio de Ciencia e Innovación (Espanya)</mods:namePart>
          </mods:name>
          <mods:name>
            <mods:role>
              <mods:roleTerm type="text">funder</mods:roleTerm>
            </mods:role>
            <mods:namePart>Ministerio de Economía y Competitividad (Espanya)</mods:namePart>
          </mods:name>
          <mods:extension>
            <mods:dateAccessioned encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAccessioned>
          </mods:extension>
          <mods:extension>
            <mods:dateAvailable encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAvailable>
          </mods:extension>
          <mods:originInfo>
            <mods:dateIssued encoding="iso8601">2017-01</mods:dateIssued>
          </mods:originInfo>
          <mods:identifier type="issn">0031-3203</mods:identifier>
          <mods:identifier type="uri">http://hdl.handle.net/10256/13152</mods:identifier>
          <mods:identifier type="doi">http://dx.doi.org/10.1016/j.patcog.2016.07.008</mods:identifier>
          <mods:identifier type="idgrec">025389</mods:identifier>
          <mods:abstract>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</mods:abstract>
          <mods:language>
            <mods:languageTerm authority="rfc3066">eng</mods:languageTerm>
          </mods:language>
          <mods:accessCondition type="useAndReproduction">Tots els drets reservats</mods:accessCondition>
          <mods:subject>
            <mods:topic>Imatges -- Processament</mods:topic>
          </mods:subject>
          <mods:subject>
            <mods:topic>Image processing</mods:topic>
          </mods:subject>
          <mods:subject>
            <mods:topic>Imatges -- Segmentació</mods:topic>
          </mods:subject>
          <mods:subject>
            <mods:topic>Imaging segmentation</mods:topic>
          </mods:subject>
          <mods:subject>
            <mods:topic>Visió per ordinador</mods:topic>
          </mods:subject>
          <mods:subject>
            <mods:topic>Computer vision</mods:topic>
          </mods:subject>
          <mods:titleInfo>
            <mods:title>A collection of challenging motion segmentation benchmark datasets</mods:title>
          </mods:titleInfo>
          <mods:genre>info:eu-repo/semantics/article</mods:genre>
        </mods:mods>
      </xmlData>
    </mdWrap>
  </dmdSec>
  <amdSec ID="TMD_10256_13152">
    <rightsMD ID="RIG_10256_13152">
      <mdWrap MDTYPE="OTHER" MIMETYPE="text/plain" OTHERMDTYPE="DSpaceDepositLicense">
        <binData>Q29uZGljaW9ucyBkZWwgZGlww7JzaXQKCgpQZXIgcG9kZXIgcHVibGljYXIgZWwgZG9jdW1lbnQgYWwgRFVHaSBlbnMgY2FsIHVuYSBhdXRvcml0emFjacOzIHZvc3RyYSBwZXIgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVsIHRleHQgZW4gbGVzIGNvbmRpY2lvbnMgc2Vnw7xlbnRzOgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGEgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVscyBkb2N1bWVudHMsIGRlIGZvcm1hIMOtbnRlZ3JhIG8gcGFyY2lhbCwgc2Vuc2Ugb2J0ZW5pciBjYXAgYmVuZWZpY2kgY29tZXJjaWFsLCDDum5pY2FtZW50IGFtYiBmaW5hbGl0YXRzIGRlIHJlY2VyY2EgaSBzdXBvcnQgbyBpbOKAomx1c3RyYWNpw7MgZGUgbGEgZG9jw6huY2lhLCBtaXRqYW7Dp2FudCBsYSBpbmNvcnBvcmFjacOzIGRlbHMgZG9jdW1lbnRzIGEgdW5hIGJhc2UgZGUgZGFkZXMgZWxlY3Ryw7JuaWNhIGTigJlhY2PDqXMgb2JlcnQuCgoKUGVyIGEgYXF1ZXN0ZXMgZmluYWxpdGF0cyBjZWRlaXhvIGRlIGZvcm1hIG5vIGV4Y2x1c2l2YSwgc2Vuc2UgbMOtbWl0IHRlbXBvcmFsIG5pIHRlcnJpdG9yaWFsLCBlbHMgZHJldHMgZOKAmWV4cGxvdGFjacOzIHF1ZSBlbSBjb3JyZXNwb25lbiBjb20gYSBhdXRvci9hLgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGxhIGPDsnBpYSBkZWxzIGRvY3VtZW50cyBlbiB1biBhbHRyZSBzdXBvcnQsIGFkYXB0YXItbG9zIG8gdHJhbnNmb3JtYXItbG9zIGFtYiBmaW5hbGl0YXRzIGRlIGNvbnNlcnZhY2nDsyBvIGRpZnVzacOzLCBpIGzigJlhY29yZCBhbWIgdGVyY2VyZXMgcGVyc29uZXMgcGVyIHJlYWxpdHphciBhcXVlc3RhIGNvbnNlcnZhY2nDsyBpIGRpZnVzacOzIHJlc3BlY3RhbnQgbGEgY2Vzc2nDsyBkZSBkcmV0cyBxdWUgYXJhIGVmZWN0dW8uCgoKLSBFbSByZXNlcnZvIGxhIHJlc3RhIGRlIGRyZXRzIGFscyBxdWFscyBubyBlcyBmYSByZWZlcsOobmNpYSBlbiBlbCBwcmVzZW50IGRvY3VtZW50LgoKCkxhIFVkRyBhZ3JhZWl4IGxhIHZvc3RyYSBjb2zigKJsYWJvcmFjacOzLgo=</binData>
      </mdWrap>
    </rightsMD>
  </amdSec>
  <amdSec ID="FO_10256_13152_1">
    <techMD ID="TECH_O_10256_13152_1">
      <mdWrap MDTYPE="PREMIS">
        <xmlData schemaLocation="http://www.loc.gov/standards/premis http://www.loc.gov/standards/premis/PREMIS-v1-0.xsd">
          <premis:premis>
            <premis:object>
              <premis:objectIdentifier>
                <premis:objectIdentifierType>URL</premis:objectIdentifierType>
                <premis:objectIdentifierValue>https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf</premis:objectIdentifierValue>
              </premis:objectIdentifier>
              <premis:objectCategory>File</premis:objectCategory>
              <premis:objectCharacteristics>
                <premis:fixity>
                  <premis:messageDigestAlgorithm>MD5</premis:messageDigestAlgorithm>
                  <premis:messageDigest>431770cc55cfb7a9d9e3fbac7f90554c</premis:messageDigest>
                </premis:fixity>
                <premis:size>6965871</premis:size>
                <premis:format>
                  <premis:formatDesignation>
                    <premis:formatName>application/pdf</premis:formatName>
                  </premis:formatDesignation>
                </premis:format>
              </premis:objectCharacteristics>
              <premis:originalName>CollectionChallengingMotion.pdf</premis:originalName>
            </premis:object>
          </premis:premis>
        </xmlData>
      </mdWrap>
    </techMD>
  </amdSec>
  <fileSec>
    <fileGrp USE="ORIGINAL">
      <file ADMID="FO_10256_13152_1" CHECKSUM="431770cc55cfb7a9d9e3fbac7f90554c" CHECKSUMTYPE="MD5" GROUPID="GROUP_BITSTREAM_10256_13152_1" ID="BITSTREAM_ORIGINAL_10256_13152_1" MIMETYPE="application/pdf" SEQ="1" SIZE="6965871">
        <FLocat LOCTYPE="URL" href="https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf" type="simple"/>
      </file>
    </fileGrp>
  </fileSec>
  <structMap LABEL="DSpace Object" TYPE="LOGICAL">
    <div ADMID="DMD_10256_13152" TYPE="DSpace Object Contents">
      <div TYPE="DSpace BITSTREAM">
        <fptr FILEID="BITSTREAM_ORIGINAL_10256_13152_1"/>
      </div>
    </div>
  </structMap>
</mets>
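Two pieces of the METS record invite small sanity checks. The deposit licence in the `rightsMD` section is base64-encoded plain text (Catalan), and the `fileSec`/PREMIS sections both give an MD5 checksum for the PDF bitstream. A minimal sketch of decoding the one and verifying the other, using only the standard library (the `bin_data` value below is abridged to the first line of the blob above; the checksum is copied verbatim from the record):

```python
import base64
import hashlib

# Decode the rightsMD binData payload; it is UTF-8 text under base64.
# Abridged here to the opening line of the blob above.
bin_data = "Q29uZGljaW9ucyBkZWwgZGlww7JzaXQ="
licence_text = base64.b64decode(bin_data).decode("utf-8")
print(licence_text)  # → Condicions del dipòsit  ("Deposit conditions")

# Verify a downloaded bitstream against the CHECKSUM/CHECKSUMTYPE
# attributes of the <file> element (MD5, per the record).
EXPECTED_MD5 = "431770cc55cfb7a9d9e3fbac7f90554c"

def md5_matches(payload: bytes, expected: str) -> bool:
    """Return True if payload hashes to the expected hex MD5 digest."""
    return hashlib.md5(payload).hexdigest() == expected.lower()

# After fetching the PDF bytes from the FLocat URL, one would call:
#   md5_matches(pdf_bytes, EXPECTED_MD5)
```

The expected size (6965871 bytes) from `<premis:size>`/`SIZE` can be checked the same way with `len(payload)` before hashing.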

mods


    <?xml version="1.0" encoding="UTF-8" ?>

<mods:mods schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
  <mods:name>
    <mods:namePart>Muhammad Habib, Mahmood</mods:namePart>
  </mods:name>
  <mods:name>
    <mods:namePart>Diez, Yago</mods:namePart>
  </mods:name>
  <mods:name>
    <mods:namePart>Salvi, Joaquim</mods:namePart>
  </mods:name>
  <mods:name>
    <mods:namePart>Lladó Bardera, Xavier</mods:namePart>
  </mods:name>
  <mods:extension>
    <mods:dateAvailable encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAvailable>
  </mods:extension>
  <mods:extension>
    <mods:dateAccessioned encoding="iso8601">2016-11-17T07:23:07Z</mods:dateAccessioned>
  </mods:extension>
  <mods:originInfo>
    <mods:dateIssued encoding="iso8601">2017-01</mods:dateIssued>
  </mods:originInfo>
  <mods:identifier type="issn">0031-3203</mods:identifier>
  <mods:identifier type="uri">http://hdl.handle.net/10256/13152</mods:identifier>
  <mods:identifier type="doi">http://dx.doi.org/10.1016/j.patcog.2016.07.008</mods:identifier>
  <mods:identifier type="idgrec">025389</mods:identifier>
  <mods:abstract>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</mods:abstract>
  <mods:language>
    <mods:languageTerm>eng</mods:languageTerm>
  </mods:language>
  <mods:accessCondition type="useAndReproduction">info:eu-repo/semantics/embargoedAccess</mods:accessCondition>
  <mods:accessCondition type="useAndReproduction">Tots els drets reservats</mods:accessCondition>
  <mods:subject>
    <mods:topic>Imatges -- Processament</mods:topic>
  </mods:subject>
  <mods:subject>
    <mods:topic>Image processing</mods:topic>
  </mods:subject>
  <mods:subject>
    <mods:topic>Imatges -- Segmentació</mods:topic>
  </mods:subject>
  <mods:subject>
    <mods:topic>Imaging segmentation</mods:topic>
  </mods:subject>
  <mods:subject>
    <mods:topic>Visió per ordinador</mods:topic>
  </mods:subject>
  <mods:subject>
    <mods:topic>Computer vision</mods:topic>
  </mods:subject>
  <mods:titleInfo>
    <mods:title>A collection of challenging motion segmentation benchmark datasets</mods:title>
  </mods:titleInfo>
  <mods:genre>info:eu-repo/semantics/article</mods:genre>
</mods:mods>

oai_datacite


    <?xml version="1.0" encoding="UTF-8" ?>

<datacite:resource schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4-2/metadata.xsd">
  <datacite:identifier identifierType="Handle">http://hdl.handle.net/10256/13152</datacite:identifier>
  <datacite:titles>
    <datacite:title>A collection of challenging motion segmentation benchmark datasets</datacite:title>
  </datacite:titles>
  <datacite:creators>
    <datacite:creator>
      <datacite:creatorName>Muhammad Habib, Mahmood</datacite:creatorName>
    </datacite:creator>
    <datacite:creator>
      <datacite:creatorName>Diez, Yago</datacite:creatorName>
      <datacite:nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-4521-9113</datacite:nameIdentifier>
    </datacite:creator>
    <datacite:creator>
      <datacite:creatorName>Salvi, Joaquim</datacite:creatorName>
      <datacite:nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-9482-7126</datacite:nameIdentifier>
    </datacite:creator>
    <datacite:creator>
      <datacite:creatorName>Lladó Bardera, Xavier</datacite:creatorName>
      <datacite:nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2777-3479</datacite:nameIdentifier>
    </datacite:creator>
  </datacite:creators>
  <datacite:contributors/>
  <datacite:subjects>
    <datacite:subject>Imatges -- Processament</datacite:subject>
    <datacite:subject>Image processing</datacite:subject>
    <datacite:subject>Imatges -- Segmentació</datacite:subject>
    <datacite:subject>Imaging segmentation</datacite:subject>
    <datacite:subject>Visió per ordinador</datacite:subject>
    <datacite:subject>Computer vision</datacite:subject>
  </datacite:subjects>
  <datacite:descriptions>
    <datacite:description descriptionType="Abstract">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</datacite:description>
  </datacite:descriptions>
  <datacite:dates>
    <datacite:date dateType="Issued">2017-01</datacite:date>
  </datacite:dates>
  <datacite:publicationYear>2017</datacite:publicationYear>
  <datacite:resourceType resourceTypeGeneral="Dataset"/>
  <datacite:languages>
    <datacite:language>eng</datacite:language>
  </datacite:languages>
  <datacite:relatedIdentifiers>
    <datacite:relatedIdentifier relatedIdentifierType="URL" relationType="IsSupplementTo">0031-3203</datacite:relatedIdentifier>
    <datacite:relatedIdentifier relatedIdentifierType="DOI" relationType="IsSupplementTo">http://dx.doi.org/10.1016/j.patcog.2016.07.008</datacite:relatedIdentifier>
    <datacite:relatedIdentifier relatedIdentifierType="URL" relationType="IsSupplementTo">025389</datacite:relatedIdentifier>
  </datacite:relatedIdentifiers>
  <datacite:rightsList>
    <datacite:rights>Tots els drets reservats</datacite:rights>
    <datacite:rights rightsURI="info:eu-repo/semantics/embargoedAccess">info:eu-repo/semantics/embargoedAccess</datacite:rights>
  </datacite:rightsList>
  <datacite:formats>
    <datacite:format>application/pdf</datacite:format>
  </datacite:formats>
  <datacite:publisher>Elsevier</datacite:publisher>
</datacite:resource>

ore


    <?xml version="1.0" encoding="UTF-8" ?>

<atom:entry schemaLocation="http://www.w3.org/2005/Atom http://www.kbcafe.com/rss/atom.xsd.xml">
  <atom:id>http://hdl.handle.net/10256/13152/ore.xml</atom:id>
  <atom:link href="http://hdl.handle.net/10256/13152" rel="alternate"/>
  <atom:link href="http://hdl.handle.net/10256/13152/ore.xml" rel="http://www.openarchives.org/ore/terms/describes"/>
  <atom:link href="http://hdl.handle.net/10256/13152/ore.xml#atom" rel="self" type="application/atom+xml"/>
  <atom:published>2016-11-17T07:23:07Z</atom:published>
  <atom:updated>2016-11-17T07:23:07Z</atom:updated>
  <atom:source>
    <atom:generator>DUGiDocs</atom:generator>
  </atom:source>
  <atom:title>A collection of challenging motion segmentation benchmark datasets</atom:title>
  <atom:author>
    <atom:name>Muhammad Habib, Mahmood</atom:name>
  </atom:author>
  <atom:author>
    <atom:name>Diez, Yago</atom:name>
  </atom:author>
  <atom:author>
    <atom:name>Salvi, Joaquim</atom:name>
  </atom:author>
  <atom:author>
    <atom:name>Lladó Bardera, Xavier</atom:name>
  </atom:author>
  <atom:category label="Aggregation" scheme="http://www.openarchives.org/ore/terms/" term="http://www.openarchives.org/ore/terms/Aggregation"/>
  <atom:category scheme="http://www.openarchives.org/ore/atom/modified" term="2016-11-17T07:23:07Z"/>
  <atom:category label="DSpace Item" scheme="http://www.dspace.org/objectModel/" term="DSpaceItem"/>
  <atom:link href="https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf" length="6965871" rel="http://www.openarchives.org/ore/terms/aggregates" title="CollectionChallengingMotion.pdf" type="application/pdf"/>
  <oreatom:triples>
    <rdf:Description about="http://hdl.handle.net/10256/13152/ore.xml#atom">
      <rdf:type resource="http://www.dspace.org/objectModel/DSpaceItem"/>
      <dcterms:modified>2016-11-17T07:23:07Z</dcterms:modified>
    </rdf:Description>
    <rdf:Description about="https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf">
      <rdf:type resource="http://www.dspace.org/objectModel/DSpaceBitstream"/>
      <dcterms:description>ORIGINAL</dcterms:description>
    </rdf:Description>
    <rdf:Description about="https://dugi-doc.udg.edu/bitstream/10256/13152/2/license.txt">
      <rdf:type resource="http://www.dspace.org/objectModel/DSpaceBitstream"/>
      <dcterms:description>LICENSE</dcterms:description>
    </rdf:Description>
    <rdf:Description about="https://dugi-doc.udg.edu/bitstream/10256/13152/3/CollectionChallengingMotion.pdf.jpg">
      <rdf:type resource="http://www.dspace.org/objectModel/DSpaceBitstream"/>
      <dcterms:description>THUMBNAIL</dcterms:description>
    </rdf:Description>
  </oreatom:triples>
</atom:entry>

qdc


    <?xml version="1.0" encoding="UTF-8" ?>

<qdc:qualifieddc schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
  <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
  <dc:creator>Muhammad Habib, Mahmood</dc:creator>
  <dc:creator>Diez, Yago</dc:creator>
  <dc:creator>Salvi, Joaquim</dc:creator>
  <dc:creator>Lladó Bardera, Xavier</dc:creator>
  <dc:contributor>Ministerio de Ciencia e Innovación (Espanya)</dc:contributor>
  <dc:contributor>Ministerio de Economía y Competitividad (Espanya)</dc:contributor>
  <dc:subject>Imatges -- Processament</dc:subject>
  <dc:subject>Image processing</dc:subject>
  <dc:subject>Imatges -- Segmentació</dc:subject>
  <dc:subject>Imaging segmentation</dc:subject>
  <dc:subject>Visió per ordinador</dc:subject>
  <dc:subject>Computer vision</dc:subject>
  <dcterms:abstract>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dcterms:abstract>
  <dcterms:issued>2017-01</dcterms:issued>
  <dc:type>info:eu-repo/semantics/article</dc:type>
  <dc:identifier>0031-3203</dc:identifier>
  <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
  <dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
  <dc:identifier>025389</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:relation>
  <dc:relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:relation>
  <dc:relation>Articles publicats (D-ATC)</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
  <dc:relation>FP7</dc:relation>
  <dc:relation>PANDORA</dc:relation>
  <dc:relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dc:relation>
  <dc:relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
  <dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
  <dc:rights>Tots els drets reservats</dc:rights>
  <dc:publisher>Elsevier</dc:publisher>
</qdc:qualifieddc>
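The qualified-DC record is the easiest crosswalk to mine programmatically, since every property is a flat namespaced child element. A minimal ElementTree sketch follows; note that the page extract above omits the `xmlns` declarations, so the fragment in the code assumes the standard Dublin Core namespace URIs (and a `qdc` URI taken from the record's schemaLocation), abridged to a few representative elements:

```python
import xml.etree.ElementTree as ET

# Abridged qualified-DC fragment mirroring the record above; the xmlns
# declarations are assumed, as the page extract does not show them.
QDC = """
<qdc:qualifieddc xmlns:qdc="http://dspace.org/qualifieddc/"
                 xmlns:dc="http://purl.org/dc/elements/1.1/"
                 xmlns:dcterms="http://purl.org/dc/terms/">
  <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
  <dc:identifier>0031-3203</dc:identifier>
  <dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
  <dcterms:issued>2017-01</dcterms:issued>
</qdc:qualifieddc>
"""

NS = {"dc": "http://purl.org/dc/elements/1.1/",
      "dcterms": "http://purl.org/dc/terms/"}

root = ET.fromstring(QDC)
identifiers = [el.text for el in root.findall("dc:identifier", NS)]
# dc:identifier mixes ISSN, handle, DOI and local ids; filter by shape.
doi = next(i for i in identifiers if "doi.org" in i)
print(doi)
```

The same namespace-map pattern applies to the MODS, DataCite, and RDF serializations above, each with its own namespace URIs.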

rdf


    <?xml version="1.0" encoding="UTF-8" ?>

  1. < rdf:RDF schemaLocation =" http://www.openarchives.org/OAI/2.0/rdf/ http://www.openarchives.org/OAI/2.0/rdf.xsd " >

    1. < ow:Publication about =" oai:dugi-doc.udg.edu:10256/13152 " >

      1. < dc:title > A collection of challenging motion segmentation benchmark datasets </ dc:title >

      2. < dc:creator > Muhammad Habib, Mahmood </ dc:creator >

      3. < dc:creator > Diez, Yago </ dc:creator >

      4. < dc:creator > Salvi, Joaquim </ dc:creator >

      5. < dc:creator > Lladó Bardera, Xavier </ dc:creator >

      6. < dc:contributor > Ministerio de Ciencia e Innovación (Espanya) </ dc:contributor >

      7. < dc:contributor > Ministerio de Economía y Competitividad (Espanya) </ dc:contributor >

      8. < dc:subject > Imatges -- Processament </ dc:subject >

      9. < dc:subject > Image processing </ dc:subject >

      10. < dc:subject > Imatges -- Segmentació </ dc:subject >

      11. < dc:subject > Imaging segmentation </ dc:subject >

      12. < dc:subject > Visió per ordinador </ dc:subject >

      13. < dc:subject > Computer vision </ dc:subject >

      <dc:description>An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</dc:description>
      <dc:date>2016-11-17T07:23:07Z</dc:date>
      <dc:date>2016-11-17T07:23:07Z</dc:date>
      <dc:date>2017-01</dc:date>
      <dc:date>info:eu-repo/date/embargoEnd/2026-01-01</dc:date>
      <dc:type>info:eu-repo/semantics/article</dc:type>
      <dc:identifier>0031-3203</dc:identifier>
      <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
      <dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
      <dc:identifier>025389</dc:identifier>
      <dc:language>eng</dc:language>
      <dc:relation>Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:relation>
      <dc:relation>© Pattern Recognition, 2017, vol. 61, p. 1-14</dc:relation>
      <dc:relation>Articles publicats (D-ATC)</dc:relation>
      <dc:relation>info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</dc:relation>
      <dc:relation>FP7</dc:relation>
      <dc:relation>PANDORA</dc:relation>
      <dc:relation>HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</dc:relation>
      <dc:relation>ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</dc:relation>
      <dc:relation>info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</dc:relation>
      <dc:relation>info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</dc:relation>
      <dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
      <dc:rights>Tots els drets reservats</dc:rights>
      <dc:publisher>Elsevier</dc:publisher>
    </ow:Publication>
  </rdf:RDF>
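The Dublin Core record above can be consumed with any XML library. A minimal sketch, assuming the record is available as a string and using the standard Dublin Core namespace URI (the rendered record omits its xmlns declarations, so the namespace binding here is an assumption):

```python
import xml.etree.ElementTree as ET

# Small fragment modelled on the record above; the dc namespace URI is the
# standard Dublin Core Elements one, assumed here since the rendered record
# does not show its xmlns declarations.
record = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>A collection of challenging motion segmentation benchmark datasets</dc:title>
  <dc:identifier>http://hdl.handle.net/10256/13152</dc:identifier>
  <dc:identifier>http://dx.doi.org/10.1016/j.patcog.2016.07.008</dc:identifier>
  <dc:rights>info:eu-repo/semantics/embargoedAccess</dc:rights>
</record>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Clark notation for namespaced lookups
root = ET.fromstring(record)

title = root.findtext(f"{DC}title")
# Repeated elements (here, the handle and DOI identifiers) come back as a list.
identifiers = [e.text for e in root.findall(f"{DC}identifier")]
print(title)
print(identifiers)
```

In a real harvest the same lookups would run against each `<record>` returned by the repository's OAI-PMH `ListRecords` response rather than an inline string.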

xoai

<?xml version="1.0" encoding="UTF-8"?>

<metadata schemaLocation="http://www.lyncode.com/xoai http://www.lyncode.com/xsd/xoai.xsd">
  <element name="dc">
    <element name="contributor">
      <element name="author">
        <element name="none">
          <field name="value">Muhammad Habib, Mahmood</field>
          <field name="value">Diez, Yago</field>
          <field name="value">Salvi, Joaquim</field>
          <field name="value">Lladó Bardera, Xavier</field>
        </element>
      </element>
      <element name="funder">
        <element name="none">
          <field name="value">Ministerio de Ciencia e Innovación (Espanya)</field>
          <field name="value">Ministerio de Economía y Competitividad (Espanya)</field>
        </element>
      </element>
    </element>
    <element name="date">
      <element name="accessioned">
        <element name="none">
          <field name="value">2016-11-17T07:23:07Z</field>
        </element>
      </element>
      <element name="available">
        <element name="none">
          <field name="value">2016-11-17T07:23:07Z</field>
        </element>
      </element>
      <element name="issued">
        <element name="none">
          <field name="value">2017-01</field>
        </element>
      </element>
      <element name="embargoEndDate">
        <element name="none">
          <field name="value">info:eu-repo/date/embargoEnd/2026-01-01</field>
        </element>
      </element>
    </element>
    <element name="identifier">
      <element name="issn">
        <element name="none">
          <field name="value">0031-3203</field>
        </element>
      </element>
      <element name="uri">
        <element name="none">
          <field name="value">http://hdl.handle.net/10256/13152</field>
        </element>
      </element>
      <element name="doi">
        <element name="none">
          <field name="value">http://dx.doi.org/10.1016/j.patcog.2016.07.008</field>
        </element>
      </element>
      <element name="idgrec">
        <element name="none">
          <field name="value">025389</field>
        </element>
      </element>
    </element>
    <element name="description">
      <element name="abstract">
        <element name="none">
          <field name="value">An in-depth analysis of computer vision methodologies is greatly dependent on the benchmarks they are tested upon. Any dataset is as good as the diversity of the true nature of the problem enclosed in it. Motion segmentation is a preprocessing step in computer vision whose publicly available datasets have certain limitations. Some databases are not up-to-date with modern requirements of frame length and number of motions, and others do not have ample ground truth in them. In this paper, we present a collection of diverse multifaceted motion segmentation benchmarks containing trajectory- and region-based ground truth. These datasets enclose real-life long and short sequences, with increased number of motions and frames per sequence, and also real distortions with missing data. The ground truth is provided on all the frames of all the sequences. A comprehensive benchmark evaluation of the state-of-the-art motion segmentation algorithms is provided to establish the difficulty of the problem and to also contribute a starting point. All the resources of the datasets have been made publicly available at http://dixie.udg.edu/udgms/</field>
        </element>
      </element>
      <element name="provenance">
        <element name="none">
          <field name="value">Submitted by Claudia Plana (claudia.plana@udg.edu) on 2016-11-17T07:23:07Z No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5)</field>
          <field name="value">Made available in DSpace on 2016-11-17T07:23:07Z (GMT). No. of bitstreams: 1 CollectionChallengingMotion.pdf: 6965871 bytes, checksum: 431770cc55cfb7a9d9e3fbac7f90554c (MD5) Previous issue date: 2017-01</field>
        </element>
      </element>
      <element name="sponsorship">
        <element name="none">

          <field name="value">This work is supported by the FP7-ICT-2011-7 project PANDORA (Ref 288273), funded by the European Commission, and two projects funded by the Ministry of Economy and Competitiveness of the Spanish Government: RAIMON (Ref CTM2011-29691-C02-02) and NICOLE (Ref TIN2014-55710-R)</field>

        </element>
      </element>
    </element>
    <element name="format">
      <element name="mimetype">
        <element name="none">
          <field name="value">application/pdf</field>
        </element>
      </element>
    </element>
    <element name="language">
      <element name="iso">
        <element name="none">
          <field name="value">eng</field>
        </element>
      </element>
    </element>
    <element name="publisher">
      <element name="none">
        <field name="value">Elsevier</field>
      </element>
    </element>
    <element name="relation">
      <element name="none">
        <field name="value">info:eu-repo/grantAgreement/MICINN//CTM2011-29691-C02-02/ES/ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA/</field>
        <field name="value">info:eu-repo/grantAgreement/MINECO//TIN2014-55710-R/ES/HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE/</field>
      </element>
      <element name="isformatof">
        <element name="none">
          <field name="value">Reproducció digital del document publicat a: http://dx.doi.org/10.1016/j.patcog.2016.07.008</field>
        </element>
      </element>
      <element name="ispartof">
        <element name="none">
          <field name="value">© Pattern Recognition, 2017, vol. 61, p. 1-14</field>
        </element>
      </element>
      <element name="ispartofseries">
        <element name="none">
          <field name="value">Articles publicats (D-ATC)</field>
        </element>
      </element>
      <element name="projectID">
        <element name="none">
          <field name="value">info:eu-repo/grantAgreement/EC/FP7/288273/EU/Persistent Autonomy through Learning, Adaptation, Observation and Re-planning/PANDORA</field>
        </element>
      </element>
      <element name="FundingProgramme">
        <element name="none">
          <field name="value">FP7</field>
        </element>
      </element>
      <element name="ProjectAcronym">
        <element name="none">
          <field name="value">PANDORA</field>
          <field name="value">HERRAMIENTAS DE NEUROIMAGEN PARA MEJORAR EL DIAGNOSIS Y EL SEGUIMIENTO CLINICO DE LOS PACIENTES CON ESCLEROSIS MULTIPLE</field>
          <field name="value">ROBOT AUTONOMO SUBMARINO PARA LA INSPECCION Y MONITORIZACION DE EXPLOTACIONES DE ACUICULTURA MARINA</field>
        </element>
      </element>
    </element>
    <element name="rights">
      <element name="none">
        <field name="value">Tots els drets reservats</field>
      </element>
      <element name="accessRights">
        <element name="none">
          <field name="value">info:eu-repo/semantics/embargoedAccess</field>
        </element>
      </element>
    </element>
    <element name="subject">
      <element name="none">
        <field name="value">Imatges -- Processament</field>
        <field name="value">Image processing</field>
        <field name="value">Imatges -- Segmentació</field>
        <field name="value">Imaging segmentation</field>
        <field name="value">Visió per ordinador</field>
        <field name="value">Computer vision</field>
      </element>
    </element>
    <element name="title">
      <element name="none">
        <field name="value">A collection of challenging motion segmentation benchmark datasets</field>
      </element>
    </element>
    <element name="type">
      <element name="none">
        <field name="value">info:eu-repo/semantics/article</field>
      </element>
      <element name="version">
        <element name="none">
          <field name="value">info:eu-repo/semantics/publishedVersion</field>
        </element>
      </element>
    </element>
    <element name="embargo">
      <element name="terms">
        <element name="none">
          <field name="value">Cap</field>
        </element>
      </element>
    </element>
  </element>
  <element name="adm">
    <element name="sets">
      <element name="hidden">
        <element name="none">
          <field name="value">NO</field>
        </element>
      </element>
    </element>
  </element>
  <element name="bundles">
    <element name="bundle">
      <field name="name">ORIGINAL</field>
      <element name="bitstreams">
        <element name="bitstream">
          <field name="name">CollectionChallengingMotion.pdf</field>
          <field name="originalName">CollectionChallengingMotion.pdf</field>
          <field name="format">application/pdf</field>
          <field name="size">6965871</field>
          <field name="url">https://dugi-doc.udg.edu/bitstream/10256/13152/1/CollectionChallengingMotion.pdf</field>
          <field name="checksum">431770cc55cfb7a9d9e3fbac7f90554c</field>
          <field name="checksumAlgorithm">MD5</field>
          <field name="sid">1</field>
        </element>
      </element>
    </element>
    <element name="bundle">
      <field name="name">LICENSE</field>
      <element name="bitstreams">
        <element name="bitstream">
          <field name="name">license.txt</field>
          <field name="originalName">license.txt</field>
          <field name="format">text/plain</field>
          <field name="size">1079</field>
          <field name="url">https://dugi-doc.udg.edu/bitstream/10256/13152/2/license.txt</field>
          <field name="checksum">0d4b4c458d95d1eb4b29247ea5bd4e04</field>
          <field name="checksumAlgorithm">MD5</field>
          <field name="sid">2</field>
        </element>
      </element>
    </element>
    <element name="bundle">
      <field name="name">THUMBNAIL</field>
      <element name="bitstreams">
        <element name="bitstream">
          <field name="name">CollectionChallengingMotion.pdf.jpg</field>
          <field name="originalName">CollectionChallengingMotion.pdf.jpg</field>
          <field name="description">Generated Thumbnail</field>
          <field name="format">image/jpeg</field>
          <field name="size">3325</field>
          <field name="url">https://dugi-doc.udg.edu/bitstream/10256/13152/3/CollectionChallengingMotion.pdf.jpg</field>
          <field name="checksum">9429b58149c2c6500622383b8489393b</field>
          <field name="checksumAlgorithm">MD5</field>
          <field name="sid">3</field>
        </element>
      </element>
    </element>
  </element>
  <element name="others">
    <field name="handle">10256/13152</field>
    <field name="identifier">oai:dugi-doc.udg.edu:10256/13152</field>
    <field name="lastModifyDate">2024-07-08 12:58:35.598</field>
  </element>
  <element name="repository">
    <field name="name">DUGiDocs</field>
    <field name="mail">oriol.olive@udg.edu</field>
  </element>
  <element name="license">
    <field name="bin">Q29uZGljaW9ucyBkZWwgZGlww7JzaXQKCgpQZXIgcG9kZXIgcHVibGljYXIgZWwgZG9jdW1lbnQgYWwgRFVHaSBlbnMgY2FsIHVuYSBhdXRvcml0emFjacOzIHZvc3RyYSBwZXIgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVsIHRleHQgZW4gbGVzIGNvbmRpY2lvbnMgc2Vnw7xlbnRzOgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGEgZGlmb25kcmUsIHB1YmxpY2FyIG8gY29tdW5pY2FyIGVscyBkb2N1bWVudHMsIGRlIGZvcm1hIMOtbnRlZ3JhIG8gcGFyY2lhbCwgc2Vuc2Ugb2J0ZW5pciBjYXAgYmVuZWZpY2kgY29tZXJjaWFsLCDDum5pY2FtZW50IGFtYiBmaW5hbGl0YXRzIGRlIHJlY2VyY2EgaSBzdXBvcnQgbyBpbOKAomx1c3RyYWNpw7MgZGUgbGEgZG9jw6huY2lhLCBtaXRqYW7Dp2FudCBsYSBpbmNvcnBvcmFjacOzIGRlbHMgZG9jdW1lbnRzIGEgdW5hIGJhc2UgZGUgZGFkZXMgZWxlY3Ryw7JuaWNhIGTigJlhY2PDqXMgb2JlcnQuCgoKUGVyIGEgYXF1ZXN0ZXMgZmluYWxpdGF0cyBjZWRlaXhvIGRlIGZvcm1hIG5vIGV4Y2x1c2l2YSwgc2Vuc2UgbMOtbWl0IHRlbXBvcmFsIG5pIHRlcnJpdG9yaWFsLCBlbHMgZHJldHMgZOKAmWV4cGxvdGFjacOzIHF1ZSBlbSBjb3JyZXNwb25lbiBjb20gYSBhdXRvci9hLgoKCi0gQXV0b3JpdHpvIGEgbGEgVW5pdmVyc2l0YXQgZGUgR2lyb25hIGxhIGPDsnBpYSBkZWxzIGRvY3VtZW50cyBlbiB1biBhbHRyZSBzdXBvcnQsIGFkYXB0YXItbG9zIG8gdHJhbnNmb3JtYXItbG9zIGFtYiBmaW5hbGl0YXRzIGRlIGNvbnNlcnZhY2nDsyBvIGRpZnVzacOzLCBpIGzigJlhY29yZCBhbWIgdGVyY2VyZXMgcGVyc29uZXMgcGVyIHJlYWxpdHphciBhcXVlc3RhIGNvbnNlcnZhY2nDsyBpIGRpZnVzacOzIHJlc3BlY3RhbnQgbGEgY2Vzc2nDsyBkZSBkcmV0cyBxdWUgYXJhIGVmZWN0dW8uCgoKLSBFbSByZXNlcnZvIGxhIHJlc3RhIGRlIGRyZXRzIGFscyBxdWFscyBubyBlcyBmYSByZWZlcsOobmNpYSBlbiBlbCBwcmVzZW50IGRvY3VtZW50LgoKCkxhIFVkRyBhZ3JhZWl4IGxhIHZvc3RyYSBjb2zigKJsYWJvcmFjacOzLgo=</field>
  </element>
</metadata>
