Joost Geurts

European Affairs Manager
  • GiantSteps Seven League Boots for Music Creation and Performance.
    The GiantSteps project aims to create the "seven-league boots" for music production in the next decade and beyond. We envision digital musical tools that unleash the creative potential of practitioners by targeting three directions: (1) developing and integrating musical expert agents; (2) developing improved interfaces and paradigms for musical human-computer interaction and for collaborative control of multi-dimensional parameter spaces; and (3) addressing low-cost portable devices. The GiantSteps project unites leading music research institutions (UPF, JKU), industrial R&D companies (Native Instruments, Reactable, JCP-Connect), and music practitioners (STEIM, Red Bull Music Academy) to combine techniques and technologies in new ways, including state-of-the-art interface design techniques with new MIR methods in the areas of real-time interaction and creativity. The consortium's industry partners will guarantee the alignment of these cutting-edge technologies with market requirements.
  • CrowdRec Crowd-powered recommendation for continuous digital media access and exchange in social networks.
    Millions of people find the digital media that they want and need via social networks, and rely on recommendations to sort a flood of posts, friends, multimedia and promoted content. Today's users, however, require a new generation of smart media systems producing feeds that keep pace with their moment-to-moment needs in their fast-moving mobile worlds. Meeting this demand means facing the grand challenge of providing recommendations that are simultaneously real-time, large-scale, socially informed, interactive, and context-aware. CrowdRec addresses this challenge by pioneering a new breed of algorithms that combine crowdsourcing and recommendation to achieve a new generation of social smartfeeds for access and exchange of digital media in social networks.
  • SocialSensor will develop a new framework for enabling real-time multimedia indexing and search in the Social Web.
    The project will move beyond conventional text-based indexing and retrieval models by mining and aggregating user inputs and content over multiple social networking sites. Social Indexing will incorporate information about the structure and activity of the user's social network directly into the multimedia analysis and search process. Furthermore, it will enhance the multimedia consumption experience by developing novel user-centric media visualization and browsing paradigms. For example, SocialSensor will analyse the dynamic and massive user contributions in order to extract unbiased trending topics and events and will use social connections for improved recommendations.
  • I-SEARCH aims to provide a novel unified framework for multimodal content indexing, sharing, search and retrieval.
    The I-SEARCH framework will be able to handle specific types of multimedia and multimodal content (text, 2D image, sketch, video, 3D objects and audio) alongside real-world information, all of which can be used as queries to retrieve any available relevant content of the aforementioned types.
UTC (2010-2011)
  • C2M C2M is a French project on collaborative authoring of structured multimedia documents.
    Since the documents in C2M are structured according to a specified document model, they can be automatically adapted to different output formats. In addition, the explicit structure can be exploited to resolve versioning issues and merge conflicts. The project is based on the open-source Scenari authoring software, which has been developed for over 10 years at UTC and has been successfully applied in several industrial sectors (see Kelis).
INRIA (2006-2010)
  • CHORUS CHORUS is a European research project about search-engine technology in its broadest context.
    CHORUS aims at creating the conditions for mutual information exchange and cross-fertilisation between European projects in the search-engine domain and the recently launched national and international initiatives in this area. A particular emphasis on setting concrete R&D and industrial objectives for multimedia search in Europe is planned through discussion groups limited to selected representatives from industry and academia, and through the organisation of open-participation workshops, conferences and summer schools.
  • VITALAS VITALAS is a European R&D project about novel search technologies that address the specific demands of large-scale professional audio-visual archives, such as broadcasters and press agencies.
    VITALAS is a use-case-driven project that aims at providing a pre-industrial prototype system for intelligent access to professional multimedia archives, offering consumers new technological functionality. The project seeks to make novel contributions in cross-media (audio/speech, video, image, text) indexing and content enrichment, and uses different interactive retrieval methods (query refinement with log files, RFB, context adaptation).
CWI (2002-2006)
  • Cuypers Cuypers is a research prototype system developed to experiment with the automatic generation of Web-based presentations as an interface to semi-structured multimedia databases.

    It is implemented in SWI-Prolog using a finite domain constraint solver (clp(fd)) and the object-oriented Prolog extension LogTalk.

    We developed three specific scenarios:

    1. ScalAR automatically generates multimedia documents that are adapted for a specific delivery context. The documents are generated based on a fixed set of relationships that are represented in an RDF graph stored in a Sesame repository. The relations are converted from a relational database provided by the Rijksmuseum, which the museum uses to populate a part of its website (See demo video).
    2. The SEMINF demonstrator automatically infers semantic relationships between the query results based on the Dublin Core metadata that is associated with the media items in the archive. This metadata is made available through the Open Archives Initiative (OAI), which facilitates interoperability between digital archives. The inferred relationships are then used to automatically generate a multimedia document that conveys these relations to the user.
    3. The DISC use case generates complex multimedia biographies exploiting a large repository of semantic web data.
    Related research project: A Document Engineering Model and Processing Framework for Multimedia Documents
    Electronic documents are different from their traditional counterparts in the sense that they do not have an inherent physical representation. Document engineering uses this notion to automatically adapt the presentation of a document to the context in which it is presented. The document engineering paradigm is particularly well suited for textual documents. Nevertheless, the advantages of document engineering are also desirable for documents which are not based on text-flow, such as time-based multimedia and other types of spatio-temporal constrained documents. Existing document engineering technology, however, implicitly assumes that documents are based on text-flow. Some of these assumptions conflict with spatio-temporal constrained documents, which explains why current document engineering tools do not work as well for such documents.

    In our research we make the underlying assumptions of text-flow based document engineering explicit and study the way these assumptions conflict with spatio-temporal constrained documents. We use this to define requirements for a document engineering model that is independent of implicit text-flow assumptions. The resulting model defines a source document as an explicit representation of the message intended by the author. The transformation rules exploit knowledge about domain, design and discourse in order to convey the intended message effectively and ensure that the result meets the requirements imposed by the presentation context. We have implemented this model by developing an architecture that integrates elements from web, document processing and knowledge-intensive architectures.
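The constraint-based generation step Cuypers performs with clp(fd) can be illustrated in miniature. The sketch below (in Python rather than Prolog, and with invented item names and constraints, not the actual Cuypers rules) places three media items on a small vertical grid by generate-and-test over a finite domain; a real finite-domain solver such as clp(fd) prunes the domains by constraint propagation instead of enumerating.

```python
# Illustrative sketch, not the actual Cuypers code: a tiny finite-domain
# constraint search for spatial layout, analogous to what clp(fd) does.
from itertools import permutations

ROWS = range(5)                        # candidate row positions (the finite domain)
ITEMS = ["title", "image", "caption"]  # hypothetical media items to place

def satisfies(pos):
    # Spatial constraints: title above image, caption directly below image.
    return pos["title"] < pos["image"] and pos["caption"] == pos["image"] + 1

def solve():
    # Naive generate-and-test over all assignments of distinct rows to items.
    for rows in permutations(ROWS, len(ITEMS)):
        pos = dict(zip(ITEMS, rows))
        if satisfies(pos):
            return pos
    return None

layout = solve()
print(layout)  # a row assignment satisfying both constraints
```

Changing the delivery context in this scheme means changing the constraint set (e.g. a narrower screen adds a column bound) while the source items stay the same, which is the adaptation idea the scenarios above rely on.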
  • Semantic Web version of the Media Streams cinematography ontology
    This work was done in cooperation with Prof. Marc Davis at the Garage Cinema Research group at the School of Information Management and Systems (SIMS) University of California, Berkeley USA.

    Media Streams is an intelligent annotation tool for digital video material developed by Marc Davis in the early nineties. Part of Media Streams is a proprietary cinematographic ontology embedded in the Media Streams application. Web technology was used to transform the existing video ontology into a commonly accessible format. Recombination of existing video material was then used as an example application, in which the video metadata enables the retrieval of video footage based on both content descriptions and cinematographic concepts, such as establishing and reaction shots.
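The benefit of moving the ontology into a commonly accessible format is that footage becomes retrievable by cinematographic concept as well as by content. A minimal sketch of that idea, with RDF-style triples modelled as plain tuples (all clip identifiers and property names below are invented for illustration; they are not the actual Media Streams terms):

```python
# Hypothetical metadata as (subject, predicate, object) triples, RDF-style.
triples = [
    ("clip1", "shotType", "EstablishingShot"),
    ("clip1", "depicts", "city skyline"),
    ("clip2", "shotType", "ReactionShot"),
    ("clip2", "depicts", "surprised face"),
    ("clip3", "shotType", "EstablishingShot"),
    ("clip3", "depicts", "office building"),
]

def query(predicate, obj):
    # Return every subject that has the given predicate/object pair.
    return [s for (s, p, o) in triples if p == predicate and o == obj]

# Retrieve footage by cinematographic concept rather than by content:
print(query("shotType", "EstablishingShot"))  # → ['clip1', 'clip3']
```

With both kinds of descriptions in one queryable store, a recombination application can ask for, say, an establishing shot of a city followed by a reaction shot, which is the retrieval pattern described above.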
  • Automatic inference of semantic relationships between Dublin Core annotated media items
    This work was done in cooperation with Prof. Jane Hunter and Suzanne Little at Distributed Systems Technology Centre (DSTC) Brisbane, Australia.

    We developed a search, retrieval and presentation system, which used Dublin Core metadata, describing mixed-media resources, to infer semantic relationships across multiple large archives. These semantic relationships were mapped to spatio-temporal relations and conveyed using a multimedia presentation. Our underlying hypothesis was that by using automated computer processing of metadata to organize and combine semantically related objects within multimedia presentations, the system might generate new knowledge by exposing previously unrecognized connections. In addition, the use of multi-layered, information-rich multimedia to present the results enables faster and easier information browsing, analysis, interpretation and deduction by the end-user.
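The inference step can be sketched as pairwise comparison of Dublin Core fields. In the sketch below the field names follow Dublin Core (creator, subject), but the items, values and inference rules are invented for illustration, not taken from the actual system:

```python
# Hypothetical media items with a few Dublin Core fields each.
items = {
    "photo42":  {"creator": "J. Smith", "subject": "federation", "date": "1901"},
    "letter07": {"creator": "J. Smith", "subject": "trade",      "date": "1903"},
    "map15":    {"creator": "A. Jones", "subject": "federation", "date": "1901"},
}

def infer_relations(items):
    # Pairwise comparison of DC fields yields typed relationships that a
    # presentation generator can then map to spatial/temporal arrangements.
    relations = []
    ids = sorted(items)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if items[a]["creator"] == items[b]["creator"]:
                relations.append((a, "sameCreator", b))
            if items[a]["subject"] == items[b]["subject"]:
                relations.append((a, "sameSubject", b))
    return relations

print(infer_relations(items))
```

Each inferred relation then drives a presentation decision, for example grouping sameCreator items on one screen or ordering items by their date field along a timeline.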