Brainomics / CrEDIBLE conference report

CubicWeb and the Brainomics project were presented last week at the CrEDIBLE workshop (October 2-4, 2013, Sophia-Antipolis) on "Federating distributed and heterogeneous biomedical data and knowledge". We would like to thank the organizers for this nice opportunity to show the features of CubicWeb and Brainomics in the context of biomedical data.

http://credible.i3s.unice.fr/lib/tpl/credible/images/credible.png

Workshop highlights

  • A short presentation of SHI3LD, which defines data access conditions expressed as SPARQL ASK queries. The other part was a state of the art of open data licenses, and the (scarce) availability of licenses expressed in RDF. Future work seems to be an interesting combination of SHI3LD and RDF-based licenses for data access.
  • MIDAS, open-source software for sharing medical data. This project could be an interesting source of inspiration for the file-sharing part of CubicWeb, even if the (in my opinion really complicated) case of large file downloads is not addressed for now.
  • Federated queries based on FedX: the optimization techniques based on source selection and exclusive groups seem a good approach for avoiding large data transfers and for finding (sub-)optimal ways to join the different data sources. This should be taken into account in future work on the "FROM" clause in CubicWeb.
  • WebPIE/QueryPIE: a MapReduce-based approach to large-scale reasoning.

CubicWeb and Brainomics

The slides of the presentation can be downloaded as a PDF or viewed on SlideShare.

Some people seemed confused about the RQL-to-SQL translation. It relies on simple translation logic implemented in the rql2sql module. This is only an implementation trick, not so different from the one used in RDBMS-backed triplestores, which have to convert SPARQL into SQL.
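To make this concrete, here is a rough sketch of the kind of SQL such a translation might produce for a simple query. The cw_<EType> table and cw_<attr> column layout follows CubicWeb's usual storage conventions, but it is an assumption for illustration here, not actual rql2sql output.

# Sketch only: the real output of rql2sql depends on the application schema
# and on the SQL backend; table and column names below are assumptions.
rql = 'Any X WHERE X is City, X population > 100000'

# Roughly what the generated SQL could look like:
sql = """
SELECT _X.cw_eid
FROM cw_City AS _X
WHERE _X.cw_population > 100000
"""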

RQL inference: there is no magic behind the RQL inference process. Triplestores store RDF triples that carry their own schema, and thus cannot easily know the full data model without looking at all the triples; RQL, by contrast, relies on a relational database with a data model that is fixed at any given moment, which allows inference and simple checks. In particular, in this example we want all the cities of Île-de-France with more than 100,000 inhabitants, expressed in RQL as:

Any X WHERE X region Y, X population > 100000, Y uri "http://fr.dbpedia.org/resource/Île-de-France"

and SPARQL:

select ?ville where {
    ?ville db-owl:region <http://fr.dbpedia.org/resource/Île-de-France> .
    ?ville db-owl:populationTotal ?population .
    FILTER (?population > 100000)
}

Besides the fact that RQL is less verbose than SPARQL (syntax matters), the simplicity of RQL relies on the fact that it can automatically infer (similarly to SPARQL) that if X is related to Y by the region relation and has a population attribute, it must be a city. If both City and District have the region relation and a population attribute, RQL inference fetches them both transparently; otherwise one can be specific by using the is relation:

Any X WHERE X is City, X region Y, X population > 100000, Y uri "http://fr.dbpedia.org/resource/Île-de-France"
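This schema-driven inference is easier to see with a concrete data model. Below is a minimal YAMS schema sketch, with illustrative entity and attribute names (not the actual Brainomics or DBpedia model): since City and District are the only types declaring both a region relation and a population attribute, the engine can resolve the unqualified query to exactly those two types.

# Illustrative YAMS schema sketch; entity names and cardinalities are
# assumptions chosen to match the example queries above.
from yams.buildobjs import EntityType, SubjectRelation, String, Int

class Region(EntityType):
    name = String(required=True)
    uri = String(unique=True)

class City(EntityType):
    name = String(required=True)
    population = Int()
    region = SubjectRelation('Region', cardinality='1*')

class District(EntityType):
    name = String(required=True)
    population = Int()
    region = SubjectRelation('Region', cardinality='1*')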

RQL also supports subqueries, unions, full-text search, stored procedures, etc. (see the documentation).
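As a small illustration of the full-text search support, here is a sketch of running a has_text query from application code; it assumes an open CubicWeb connection named cnx (for instance inside a cubicweb-ctl shell), and the results depend on which attributes the application schema marks as fulltext-indexed.

# Sketch assuming an open CubicWeb connection `cnx`; `has_text` is RQL's
# built-in full-text search relation.
rset = cnx.execute('Any X WHERE X has_text %(q)s', {'q': 'Brainomics'})
for entity in rset.entities():
    print(entity.dc_title())  # dc_title() is the generic entity title accessor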

These really interesting discussions convinced us that we should write a journal paper detailing the theoretical and technical concepts behind RQL and the YAMS schema.